I'm happy to see that more runtimes are maturing and getting used in production cases... I can't wait to see and show what the future holds for WebAssembly on both the server side and in the browser!
Keep up the good work. Also I'd like to use this message to congratulate other runtimes that I'm excited about (apart from Wasmer, of course!): Wizard Engine [1], Wazero [2] and Lunatic [3].
Instantiating a runtime in microseconds is deeply exciting!
Though after skimming through the docs, I'm still left wondering a few things:
- JS is notably missing from the list of languages supported on the front page. But I see mentions of a Spidermonkey.wasm in the blog post. Is running JS on top of wasmtime in production a realistic prospect today? If so, where can I read more? (mainly interested in this for the instantiation time benefits, though maybe all/most of that will be negated by the embedded JS engine?)
- How should I go about building a typical web service on top of wasmtime? Can wasmtime itself handle network requests/connections or would I need to build the web server in some other host language and pass request data to wasmtime modules? Haven't been able to find anything in the docs about this.
- What would it take to build a multitenant web service where customer code is isolated using wasmtime, like the one described in the post?
These are good questions! Here are some answers from the corner of the world I know best, as a Wasmtime contributor at Fastly:
1. Spidermonkey.wasm is the basis of Fastly's JavaScript on Compute@Edge support. We have found it to be faster than QuickJS. The source code is here: https://github.com/fastly/js-compute-runtime.
3. Fastly's multi-tenant platform is closed source, but our single-tenant local development platform, which also uses wasmtime under the hood, is open source: https://github.com/fastly/viceroy. It isn't a big leap to make Viceroy multi-tenant: Wasmtime provides everything you need, and all Viceroy would have to do is dispatch on e.g. the HTTP host header to the correct tenant. Our multi-tenant platform is closed source because it is very specialized for use on Fastly's edge, not because the multi-tenant aspect is special.
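The host-header dispatch described above is mostly glue. Here's a rough, hypothetical sketch of just the routing step in Python; the tenant table, hostnames, and module filenames are invented for illustration, and a real embedding would hand the chosen module off to a Wasm runtime:

```python
# Hypothetical tenant table: Host header -> that tenant's Wasm module.
# Names are made up; a real system would load these from configuration.
TENANTS = {
    "alice.example.com": "alice.wasm",
    "bob.example.com": "bob.wasm",
}

def dispatch(headers):
    """Pick the tenant module for an incoming request by its Host header."""
    # Strip any port and normalize case before the lookup.
    host = headers.get("Host", "").split(":")[0].lower()
    module = TENANTS.get(host)
    if module is None:
        raise LookupError(f"unknown tenant: {host!r}")
    return module
```

From here, the host would instantiate `alice.wasm` (or whichever module was selected) in a fresh sandboxed instance per request, which is what makes per-tenant isolation cheap.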
Nice, the fact that Fastly is comfortable betting on Spidermonkey.wasm is the vote of confidence I needed to dig in! Also really love seeing how much Fastly is giving back to open source!
> 1. Spidermonkey.wasm is the basis of Fastly's JavaScript on Compute@Edge support.
How difficult is staying up to date with SpiderMonkey's latest or pre-release upstream versions? Do you have to maintain your own SpiderMonkey patches to make upstream work for you?
We worked with Igalia to upstream the SpiderMonkey patches required to build for wasm32-wasi. We pin to a fixed version (see: https://github.com/fastly/spidermonkey-wasi-embedding) which we haven't updated in a while.
> - JS is notably missing from the list of languages supported on the front page. But I see mentions of a Spidermonkey.wasm in the blog post. Is running JS on top of wasmtime in production a realistic prospect today? If so, where can I read more? (mainly interested in this for the instantiation time benefits, though maybe all/most of that will be negated by the embedded JS engine?)
Shopify and others use QuickJS as their JS engine of choice. See https://github.com/Shopify/javy as a starting point. The real benefit is allowing authors of plugins to write JS and not AssemblyScript, not any performance or instantiation time benefits.
> - How should I go about building a typical web service on top of wasmtime? Can wasmtime itself handle network requests/connections or would I need to build the web server in some other host language and pass request data to wasmtime modules? Haven't been able to find anything in the docs about this.
There are a lot of choices for this. None I would consider mature, but some leads:
They handle the passing of data between host and guest for you. Your module, written in a language that compiles down to wasm, is evaluated against the request. They make host functionality, such as the ability to make other network calls, available to your module as needed.
> - What would it take to build a multitenant web service where customer code is isolated using wasmtime, like the one described in the post?
See suborbital linked above. Not much. This is the primary current use case for server side wasm.
It is indeed early days, but we're pretty confident that the opportunity is real. Take webhooks. With webhooks, your customers have to bring and manage their own infrastructure. That's a pain in the ass. Everywhere that supports webhooks should also support Just Running My Damn Code. Like with GitHub Actions.
How does GitHub do it? Simple: They're owned by Microsoft. GitHub can bury the problem in an ungodly deluge of cheap virtual machines from their sister company, Azure.
You aren't owned by Microsoft, so how will you Just Run Some Code? You're going to use WebAssembly. Specifically, in a year or two, you're going to integrate some sort of off-the-shelf commodity product that's powered by WebAssembly.
Suborbital is one shot at what that future might look like. We'll be talking more about this in the next few weeks. And it's not just us: a lot of the serverside WebAssembly folks will be at Cloud Native WASM Day at KubeCon next month... say hello if you're there!
In what use cases can user-provided wasm realistically replace webhooks? It seems to me that in plenty of cases, you'd end up needing something like webhooks anyway. For instance, I use Stripe, and I implement their webhook to handle completed checkouts. My webhook implementation needs to update my database, send email using my email service provider credentials, etc. I doubt that running my own wasm code on Stripe's infrastructure would make things simpler. But I'd like to learn more on use cases where user-provided wasm really would be better.
I'm sold on the future of wasm. What really still bothers me is the effort and hoops I have to jump through to actually leverage it. Networking, just fix networking so I can drop in most things and it would be great. The fact that I have to rewrite the majority of my network access to do any external IO is painful but I get that containers took more than a decade after Google's use to get there too.
You get lots of benefits, but then you cannot use any C system calls, which most code relies on (networking, disk, GUI, etc.).
It looks like runtimes are going to create message passing interfaces to access these C system calls.
Wouldn't it be easier just to create better sandboxes for the old code rather than create a WASM runtime with new message passing interfaces to all the old C system functions?
Stay tuned for some updates from Wasmer regarding networking, I'm super excited on what's coming next for full networking apps running on Wasm... can't wait to share it!
It's a way of running WASM code, which can be anything from anywhere, safely. It only gets access to the folders you let it access as a capability. It gets no access to anything else.
So, no matter how evil, or confused, the program is, you aren't risking your entire system. It's the best part of capability based security.
This is an advertisement, not an answer. Wasmtime is a way of running WASM bytecode, which is a little virtual machine (of the Java, rather than QEMU, type). You can run your WASM code on it and because it runs in the runtime you can, at least in theory, sandbox it by controlling what it has access to. If your runtime and the computer it is running on is perfect (which happens to be an unsolved problem, sadly) then you can run anything inside of the runtime and it can only do what you give it access to. In practice the runtime is not perfect and bugs will let very malicious code escape, but overall the security of running things in such a runtime is usually much better than doing so directly, at the cost of some performance.
Java Security Manager has a rich history of leaking capabilities to code that should not have them. As has practically every capability-based system ever.
Capabilities are no Silver Bullet. They work exceptionally well with small teams, but like memory leaks, reachability decisions tend not to scale to very large teams, and people start exposing information for a feature without being able to trace the consequences of having done so.
I could potentially see someone recasting this story as a parable against shared state, but I'm not convinced it's the sole cause or that you could have one without the other. I think it is true that they share a problem space, but that's neither a particularly brave nor illuminating statement.
Do you have a favorite alternative to the capability model that you feel is superior? I somewhat casually follow these discussions and I'm always looking for new ideas (or old ones that actually work better in some cases).
The weird thing about the security APIs in Java is that it contained both capabilities and an ACL model with a sort of hierarchy to it, both of inheritance and being able to run a function in a scope. I got to use it for real on a project and if you overlooked the extreme clunkiness of trying to capture a set of rights as configuration, it wasn't that bad.
So I could say, do that again but with a better way to define rules programmatically instead of a priori. Multi-tenant, multiple roles, or a Cartesian product of the two sort of demand a little bit of bespoke rules-engine work.
Except Java has support for shared memory between threads, whereas WASM is single-threaded with message passing. Don't expect e.g. to implement a high-performance multi-threaded database in WASM anytime soon.
"Write once, run anywhere"? I've heard that many times. No sandbox is going to give you safety. Spectre and Meltdown are great examples of that. It's a matter of time and popularity. Cryptojacking and malware are already a problem[0].
I would add that it can be useful to have isolated runtimes even if you're not a PaaS. Say you need to parse an obscure file format that can be uploaded from untrusted users (and nowadays, that's all users). You've got some ancient C or C++ source code for the parser, but it hasn't been vetted for security or denial-of-service issues. Compile it to WASM, run it in an isolated runtime, and you can be confident that it won't be able to escape its sandbox while also minimizing startup times and using significantly fewer resources than needing a container for each invocation.
> You've got some ancient C or C++ source code for the parser, but it hasn't been vetted for security or denial-of-service issues. Compile it to WASM, run it in an isolated runtime, and you can be confident that it won't be able to escape its sandbox
Personally I’d want to be very cautious about this. It’s probably more secure than just running an untrusted binary directly on a server full of user-uploaded content, but I don’t know what level of confidence I should have in wasmtime not having any sandbox escapes possible.
Yes. Consider that Spectre allowed reading browser memory that wasn’t exposed to JavaScript from within the JS sandbox. Something similar probably would have been possible with WASM, if Spectre hadn’t already been found and mitigated.
I love capability based limits like this (and in Deno), but they’re not a panacea.
> For those of you who have been around for the better part of the past couple of decades, you may notice this looks very similar to RFC3875, better known as CGI (The Common Gateway Interface). While our example here certainly does not conform to the specification, you can imagine how this can be extended to turn the stdin of a basic 'command line' application into a full-blown http handler.
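To make the CGI analogy concrete, here's a toy, non-conforming sketch in Python: request metadata arrives in environment variables (using RFC 3875 names), the body on stdin, and the response (headers plus body) goes to stdout. Everything here is illustrative and not part of any Wasm runtime.

```python
def handle(environ, body):
    """Toy CGI-style handler: metadata in variables (RFC 3875 names),
    body as bytes in, full HTTP-ish response as bytes out."""
    name = environ.get("QUERY_STRING") or "world"
    payload = f"hello, {name} ({len(body)} body bytes)\n".encode()
    headers = (
        b"Content-Type: text/plain\r\n"
        b"Content-Length: " + str(len(payload)).encode() + b"\r\n\r\n"
    )
    return headers + payload

# Wiring for a real CGI-style invocation would look like:
#   sys.stdout.buffer.write(handle(dict(os.environ), sys.stdin.buffer.read()))
```

A WASI module works the same way in spirit: the host populates the "environment" and stdin, the guest writes a response to stdout, and the host turns that into an HTTP reply.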
Anything that wants to run arbitrary, possibly untrusted user-provided code, with good speed and minimal startup time. The big use cases are AWS Lambda-like things (whether that's serving web requests or reacting to event queues), as well as plugin systems for software.
I wrote a (very surface level) comparison on this last year[1]. If you already have V8 in your environment, it's probably worth using V8, but if you just want to load a wasm interpreter into some Rust code, wasmtime is a much nicer interface to work with and more lightweight.
The Bytecode Alliance is a nonprofit group with members from all over the industry working to provide what is ostensibly a reference implementation of a WebAssembly runtime w/ full WASI support. Wasmtime is their implementation.
They are far from the only implementation, though. You can find links to other runtimes in other comments here already.
I have the same questions still, after reading all the material and also researching between papers and conference talks.
My takeaway so far is: it’s a faster, more flexible, and lighter JVM-like thing.
As someone who wrote Java applets back in the day on the UI side, has written plenty of server-side code, and has seen a lot of FaaS successes and failures (more the latter than the former): all the features and use cases map 1:1, so these aren't really new use cases. Just a much better design and implementation (which may make the architectures actually end-user usable), without the moneyed interests.
The article shows startup time from enter to output being very, very fast.
Having Rust, Go, C, and C++ all able to compile down to wasm bytecode is already more flexible than the JVM. Why isn't there a popular C front end for the JVM? That's the flex. Also, wasm runs in the browser without loading applets.
Worry about bloat when the project is more mature; there’s no bloat right now and worth exploring.
Latest Javas can run hello world in about 50msec or less. If you AOT compile the app it can do so a bit faster than an app written in C, believe it or not. This is not really a competitive advantage for wasmtime.
Actually, wasm is less flexible than the JVM because with GraalVM/Truffle you can run:
1. WASM
2. LLVM bitcode
both on the JVM, alongside all the other languages it can do like Python, Ruby, Clojure, JavaScript, Kotlin, Smalltalk, R etc. Therefore you can run Rust, C and C++ on the JVM. Don't think you can run Go, but this is still way more languages on the JVM than WASM.
You ask why there's no popular C frontend for the JVM. It's because nobody really cares about running C on a VM except for people targeting Chrome/Safari. You can do it with GraalVM and it has some uses for running C language extensions to scripting languages, but otherwise it's a bit of a curiosity. Usually if you want to call a C library it's because it's an operating system API, or because you want to do things that a VM wouldn't do well anyway, e.g. stuff with inline native assembly.
To get back to a higher level: I'm not attacking the JVM; a lot of my career has been spent using it, to much success.
And the large benefits of wasm are still in the making/a bit unclear. In my original post, I question why this runtime is useful at all, as the jvm already does a lot of the same things.
However, GC by default is not in wasm right now, and that enables non-GC languages to be ported over, while it makes no sense to port those languages to the jvm. And a lot of the important stuff is written in those non-GC languages; game engines, gpu compute usage for AI, and others I probably don’t know.
The JVM is old ~ still good/great for backend server-side things, but not great for fast startup (50 ms is still 10,000× slower than wasmtime's 5 microsecond startup), non-GC stuff, or non-technical, impatient end-user browser stuff.
Hence worth the exploration - and if it means jvm being replaced 10 years later, so be it (maybe there’ll be a Java to wasm port + gc for wasm by then).
You can actually run non-GC languages on the JVM because it does expose a manual memory allocator. That's how the WASM and LLVM support work. In that case the GC gets out of the way (or is used only for Java objects). Of course in that case a lot of the benefits of the JVM aren't there, but if you need to do JIT compilation+manual memory management then it can make sense.
The real question, I think, is: if it weren't for particular technical choices by the browser people, would anyone care about JIT-compiling C? Probably not. We know how to sandbox C/C++/Rust without a JIT compiler; NaCl and other initiatives have proved that. Portability? There are really only two CPU architectures that matter, and new ones don't come along very often. Cross-compilers work. The GraalVM folks have found a good reason to JIT-compile C for language interop and interpreter extensions, but that's a pretty special case.
It is strange to me to call it 1.0 without seamless threading support and while wasm & wasi still have standards to work out.
Multithreading is a basic operation in everything I build right now. It allows me to make a highly responsive UI while doing massive computations without any user experience impact. It lets me make services which can handle thousands of requests a second. And with Rust, this all comes ergonomically. At this point, I can’t think of a single solution I would implement without doing some form of this.
While I could treat a wasm application as a client server model (have one module for interfacing / managing, have X for processing where X is the number of available cores) then share memory between them, this creates a lot of boilerplate and makes the code less ergonomic, and can drastically increase friction. I don’t want to have to hack threading into an application with so much manual setup, and I don’t like when a runtime forces architecturally decisions because it lacks a basic capability.
Anyway, congrats on the release. I will look forward to 2.0 of these runtimes.
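For what it's worth, the coordinator/worker boilerplate described above can be sketched in plain Python, with threads and queues standing in for Wasm instances and message passing. This is an analogy to show the shape of the setup, not actual Wasm tooling:

```python
import queue
import threading

def run_workers(items, n_workers=4):
    """One coordinator hands work to N workers over queues; workers never
    touch shared state directly, mimicking a no-shared-memory Wasm setup."""
    tasks = queue.Queue()
    results = queue.Queue()

    def worker():
        while True:
            item = tasks.get()
            if item is None:          # poison pill: shut this worker down
                return
            results.put(item * item)  # stand-in for "massive computation"

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for item in items:                # coordinator feeds the task queue
        tasks.put(item)
    for _ in threads:                 # one pill per worker
        tasks.put(None)
    for t in threads:
        t.join()
    out = []
    while not results.empty():
        out.append(results.get())
    return sorted(out)
```

Even in this toy form, you can see the friction the comment describes: lifecycle management, shutdown signaling, and result collection are all manual, where a threads-and-mutexes model would just share the data.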
They seem to adopt a rapid-iteration model closer to browsers than to typical language runtimes. So, like a browser's, their 1.0 is nothing close to finished. I personally have no problem with it. But I think it is indeed strange that they sound so serious about correctness while constantly rolling out new major versions.
"Nothing close to finished" is no joke. I've been trying to share memory between a main-thread wasm module controlling the UI and a worker-thread wasm module doing heavy compute. One problem after another, only to find mutexes are janky / non-operable because main-thread wasm isn't allowed to block?
So I need to come up with hacks just so my code can do basic things like locks over shared memory. Not sure what these folks are thinking, but I'm looking forward to the day when one codebase truly works everywhere.
Congrats on the 1.0 release! Wasmtime has a world class team of developers and I truly believe that WASM will play an important part in many industries. I'm pretty excited for what's to come. We've barely scratched the surface yet.
Same as with Node. Take some browser tech, remove the browser, find out it's useful on its own.
Browsers are really good incubators for fast, secure runtimes. Every time you open a new tab, that engine has to run untrusted, third-party code provided by the website. And it has to start up so quickly that you don't perceive it. Turns out those properties are useful on the server, too.
I’m not an expert, but you could drop the same WASM binary on servers of any architecture, provided the runtime is installed. I also assume (hope) this runtime is much lighter weight than e.g. Node or Ruby or Python or whatever is popular.
It also seems nice because of the sandbox aspect, similar to how I’d rather run a random tool in my browser than give it unfettered access to my local resources by default.
Doesn't seem as though I can run the QJS REPL on Windows with this yet. I thought the entire point of these sorts of projects was serving as a cross-platform runtime, but Wasmtime doesn't actually accept key input for the REPL, and Wasmer errors out with an ugly error.
thread 'main' panicked at 'not implemented: VirtualFile::poll is not implemented for non-Unix-like targets yet'
Guess I still just have to wait, but seems weird to call it 1.0.
Symlinks, hardlinks, errno, etc. It's not surprising that it's better supported on Unix than other platforms.
And yeah I agree it's disappointing they tacked such an obviously platform-biased API on to something that is supposed to be platform independent. I guess it was the pragmatic thing but still...
> And yeah I agree it's disappointing they tacked such an obviously platform-biased API on to something that is supposed to be platform independent. I guess it was the pragmatic thing but still...
What's the alternative? If you want an I/O system interface into WASM, something POSIX-like is probably the sanest choice one could make. Portable is literally in the name. In fact, I'd say POSIX is the least obviously biased system interface.
Is there some other syscall interface out there which implements files, sockets, and clocks, and is simpler than POSIX?
In fact WASI seems quite stripped down from POSIX.
What you linked to is WASI-libc, which is a C standard library implementation on top of WASI - which itself is more low-level and less POSIX-like. https://github.com/WebAssembly/WASI
Fortunately the idea of WASI is for it to be modular. It makes sense for them to have the APIs required to make porting existing POSIX-ish-compatible apps easy, but there are also WASI modules (I forget the proper name) in the works for much more general IO objects etc. that better abstract away the underlying OS if you're willing to invest in that.
So I guess you could summarise it as: backward compatible-ish first, and then figuring out what a neater, more idiomatic world would look like second.
It's a pragmatic approach to bootstrapping the ecosystem, and there's no need for the POSIX-alike parts to hold anyone back, at least long term. Pretty soon you will be able to write apps that target WASI that end up reading and writing files in practice without the app even knowing what a "file" is (the runtime can just hand it an abstract IO object). That's a step beyond just not knowing about things like symlinks, and seems to me like a more "web-flavoured" future that can work even for CLI/desktop apps.
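The "abstract IO object" idea reads roughly like dependency injection: the app codes against a stream capability, and the host decides what backs it. A tiny, hypothetical Python illustration (function and variable names are invented):

```python
import io

def count_lines(stream):
    """The 'app': it only sees a readable stream capability, never a
    path, so the host can back it with a file, a socket, or memory."""
    return sum(1 for _ in stream)

# Host wiring decides what the capability actually is. The app can't
# tell these apart, and never learns what a "file" or "symlink" is:
in_memory = io.StringIO("a\nb\nc\n")
n = count_lines(in_memory)
```

That's the step beyond preopened directories: the guest isn't handed a filtered filesystem, it's handed exactly the IO objects it needs and nothing else.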
A standard Rust program has full access to any filesystem, network, etc resources that the user running the program has. WASM sandboxes these resources by default, only giving the program access to exactly what you explicitly allow at runtime.
Performance is going to be fast enough in most cases, you get the extra sandboxing, and it runs multiplatform. I'm not speaking from experience but just my understanding, so I may be incorrect.
Congratulations! We've been following this project for a long time, and are considering embedding it in a couple of projects.
The aim is to use it as a lightweight sandbox to replace docker.
But, it still feels like the performance impact is too high compared to docker.
The perf blogpost actually uses SpiderMonkey-compiled-to-Wasm as a benchmark rather than a comparison (peer) of Wasmtime. However, there are some good comparisons done by a third party in this paper:
I don't understand how we got here. A cross platform byte code that runs in some kind of emulated machine was done ages ago and called Java. Or C#. And they seem not-exactly-immune to security problems. Then we did containers and they were better. Then we did containers running in a lightweight VM and they were better too.
Surely a compiled language on a unikernel (as in no kernel) in a lightweight VM is the ultimate good result for secure + fast + scales? Wouldn't this buy you all sorts of hardware acceleration goodies like direct access to network hardware a la DPDK and no switching between user and kernel modes?
> A cross platform byte code that runs in some kind of emulated machine was done ages ago and called Java.
Java does not emulate a machine. It abuses the term "virtual machine" for what is actually a compiler back-end. A particular difference to WASM is that the JVM's idea of how memory works is totally different from both WASM and real machines.
> And they seem not-exactly-immune to security problems.
This has nothing to do with the JVM but with the standard library, which tries to implement security for very-high-level operations. WASM is closer to browsers, which already implement security in a much more robust way.
> Then we did containers and they were better.
If you think that containers are the way to run native code in the web, I'd suggest you build a proof-of-concept. Because to me that doesn't sound like a great idea at all.
Containers are running native code. iPhones, for instance, seem happy to run containers. And it's my (perhaps shaky) understanding that the multi-process model for browsers involves running processes with reduced privileges - containers, in other words, even if not specifically "docker" ones.
AWS lambda is built on a VM/container hybrid IIRC.
And VMs, again, have direct access to hardware. This is what all the SR-IOV business is about, and how we (RoCE) can have remote disks across networks that run even vaguely well.
Virtual machines have been running arbitrary code for a long time now, I think this is pretty much a done deal.
Java and C#, to my understanding, aren't designed to run untrusted code safely: such code can cause segfaults or take over the surrounding process (with JNI or C# unsafe). With something like wasmtime I can have untrusted user code in the same process as my own. This means I can do things like have a really low-latency interconnect or start running a user's code in microseconds.
Running untrusted java code sounds like a nightmare. And saying you can do something in java if you've "patched the JDK appropriately"... How is that even comparable to a 100% sandboxed by default vm and runtime?
You patch the JDK because it doesn’t make sense to try to access the filesystem if there is no filesystem, lol. The JVM is sandboxed, it’s the APIs you give it that let it break out.
The future is bright in Wasm land :)
[1] https://github.com/titzer/wizard-engine
[2] https://github.com/tetratelabs/wazero
[3] https://github.com/lunatic-solutions/lunatic