Hacker News
“WebAssembly runtimes will replace container-based runtimes by 2030” (changelog.com)
113 points by ingve on June 22, 2023 | 169 comments


It seems like they're conflating the need to run a specific software stack with the need to do configuration management in general. Configuration management doesn't go away with WebAssembly as they're describing it. You're just making fatter statically-linked processes that boil down to interpreters plus source code. People are already statically linking with, e.g., Go or .NET Core anyway.

So I would guess that the future is static linkage with control groups, more so than a specific JavaScript/TypeScript software stack running in a specific runtime. Frankly, the way we're shipping dependencies in what are basically checksummed tarball blobs is an echo of static linkage, at a time when most OS platforms have gone so deeply down the dynamic-linkage rabbit hole that everyone ships dynamic libs instead of statically linking those libs into their executables.

Control Groups are supposed to provide process isolation but are being used to solve this configuration management issue as well because a particular company decided to use chroot in this way.


What company decided to use chroot this way? I'm going to guess it's Microsoft, but either that's not entirely correct or I'm off base.


I read it as a reference to Docker. The idea being that instead of actually statically linking our dependencies, we use dynamic linking to bundle them together as a big blob, use chroot/overlayfs to make it a filesystem, and then dynamically link at runtime. It's kind of weird when you think about it.


That's exactly what I'm referring to. The original design for cgroups wasn't to do this type of configuration management but rather to provide process isolation and control access to system resources at the base level. That's one of the reasons you're able to set per-container CPU and memory limits.

Docker is basically abusing this system for configuration management and providing some process isolation as a byproduct, because it turns out configuration management is a much more widespread problem than resource control.
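As a loose stand-in (plain Python and per-process POSIX rlimits rather than cgroups; an illustration only, not how Docker does it), here's the flavor of resource capping that cgroups generalize to whole groups of processes:

```python
import resource

# Snapshot the current file-descriptor limit for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)

# Lower the soft limit, loosely analogous to a per-container resource cap.
capped = min(64, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (capped, hard))

# The kernel now refuses to let this process exceed the cap.
files = []
try:
    for _ in range(100):
        files.append(open("/dev/null"))
except OSError:
    pass  # EMFILE: the limit kicked in

opened = len(files)
print(opened < 100)  # True: fewer than 100 opens succeeded

# Clean up and restore the original limit.
for f in files:
    f.close()
resource.setrlimit(resource.RLIMIT_NOFILE, (soft, hard))
```

The cgroup version of this is the same idea enforced per group of processes (and for CPU and memory too), which is what makes per-container limits possible.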


It does make sense. glibc depends heavily on dynamic linking partly because it's pluggable. For instance, DNS resolution supports plugins. Ditto for many other useful bits of software. If you think of a container as a frozen combination of apps and plugins, then the value becomes clearer: the software in the container can be recombined in many ways without recompilation or re-(static-)linking, while the container itself is static.
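A toy sketch of that pluggable idea in plain Python (hypothetical names; glibc's NSS does this in C with dlopen'd modules): behavior is picked at runtime from registered plugins, so the frozen bundle can be recombined without relinking anything:

```python
# Registry of resolution plugins, filled in at import time.
RESOLVERS = {}

def register(name):
    def wrap(fn):
        RESOLVERS[name] = fn
        return fn
    return wrap

@register("hosts")
def resolve_hosts(host):
    # Stand-in for /etc/hosts lookups.
    table = {"localhost": "127.0.0.1"}
    return table.get(host)

@register("static")
def resolve_static(host):
    # Stand-in for a fallback resolver; 192.0.2.x is a documentation range.
    return "192.0.2.1"

def resolve(host, order=("hosts", "static")):
    # The "frozen combination" is just the order tuple: plugins recombine
    # without recompiling anything.
    for name in order:
        addr = RESOLVERS[name](host)
        if addr:
            return addr

print(resolve("localhost"))    # 127.0.0.1
print(resolve("example.org"))  # 192.0.2.1
```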


Google is famous for statically linking everything.


I don't think it's only static linking that's the source of this practice, right?


“Namespaces” in linux provide a lot of the isolation that people associate with containers

With a mount namespace you can pivot root


What does that even mean?

When I use a container, I get all the stuff a Debian distro brings:

    A file system
    Process management
    My favorite database (SQLite)
    My favorite webserver (Apache)
    My favorite runtime (CPython)
    A gazillion helpful applications and libraries (like ImageMagick)
    Tons of convenience glue (like /etc/hosts)
I get all of that in a reproducible, isolated little universe.

How does a "WebAssembly runtime" provide me with those?


You can have all of these things on WebAssembly. You just compile them.

I think the view here is WebAssembly as another architecture. Instead of compiling for x86 or ARM (or both) you just compile to WASM and it can run anywhere.
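As a rough illustration of the "one portable bytecode, any host" idea (a toy stack machine in Python; the instruction set here is made up and is not real WASM):

```python
def run(module, *args):
    """Interpret a tiny stack-machine "module": a list of (op, operand) pairs."""
    stack = list(args)
    for op, operand in module:
        if op == "push":
            stack.append(operand)
        elif op == "dup":
            stack.append(stack[-1])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# "Compiled" from some source language; the host doesn't care which.
# Computes x*x + 1 for whatever x it is given.
square_plus_one = [("dup", None), ("mul", None), ("push", 1), ("add", None)]

print(run(square_plus_one, 3))  # 10
```

Any toolchain that can emit this module format targets every host that implements the loop, which is the compile-once-run-anywhere pitch in miniature.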


Compile what? A whole Debian distro with a Linux kernel and all 7000 packages?


Compile glue code to the new kids' favorite web stack:

  Their favorite database (SQLite)
  Their favorite webserver (Spin)
  Their favorite runtime (CPython)
Hopefully some ImageMagick, but definitely none of the Linux clutter.


Probably no kernel, but everything else.


So what does "everything else" do if it has no kernel below it?

What does curl do if it cannot talk to a network stack?

What does "touch" do if there is no filesystem to touch?

What does "dd" do if there are no block devices?


Think of a Docker container: it doesn't include a kernel. There is still a kernel somewhere, obviously.


The issue is not "no kernel", the issue is "no support tools in your runtime"


> You just compile them

I can't tell if you're being sarcastic or not.


I'm not. If the article's predictions are correct then all of these will likely compile for WASI or whatever the target at the time is as easily as you can compile Debian for x86 or ARM today.


So... the problem is that there are too many standards, so they'll introduce one more standard that will solve the problem?


Well, the obvious target would be running Docker in the browser.


I could see this greatly benefitting python in particular. Compile cpython and all the dependencies against WASM/WASI, bundle WASM wheels, and wrap the whole thing up in a single .wasm binary. Python portability woes are basically solved, provided your environment has the appropriate WASI runtime.


Ironically this is solved already by the latest release of Graal, one of WASM's competitors:

https://medium.com/graalvm/whats-new-in-graalvm-languages-16...

You can now distribute Python applications or libraries as standalone binaries or JAR files without any external dependencies. The Truffle framework on which GraalPy is built, and the Sulong LLVM runtime that GraalPy leverages for managed execution of Python’s native extensions, enable virtualization of all filesystem access of Python applications, including those to the standard library and installed packages.

So now you can build a standalone executable as follows:

    graalpy -m standalone binary --module my_script.py --output my_binary

or a JAR file that runs on GraalVM and includes GraalPy as follows:

    graalpy -m standalone java --output-directory MyJavaApplication --module my_script.py

The first form creates a fast-starting native binary. The second is a portable bytecode standalone file. So this WASM dream is delivered already, just by a different group of people.

Now, GraalPy is a different implementation from CPython, so not all Python apps run on it yet. But obviously any WASMPy would have the same issue.


Wow, that's pretty awesome!

I don't know if WASMpy would be a completely different implementation, or just CPython compiled against a WASM target. Probably the latter. Which has already been done.

https://github.com/jupyterlite/jupyterlite


Pyodide looks like an alternative implementation, at least in spirit.


But my Python code does not run in a vacuum. It gets executed by Apache, writes to a file system, and makes use of a gazillion libraries and programs that come with Debian.

What would the Python "requests" module do if it has no Linux kernel below it to pass the requests to a network stack?


Your Python code is executed by Apache? You mean like WSGI? Well, either way, the Bytecode Alliance is working on things like fork/exec.

> What would the Python "requests" module do if it has no Linux kernel below it to pass the requests to a network stack?

Why does requests need a Linux kernel? It's cross-platform. Strictly speaking, it only needs the appropriate I/O calls. WASI provides a system interface for I/O, networking, files, etc.

https://wasi.dev/

https://github.com/WebAssembly/wasi-sockets
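To make the "system interface" idea concrete, here's a plain-Python sketch of WASI-style preopened directories using POSIX openat semantics (the file names are made up; this is an analogy, not the WASI API itself):

```python
import os
import tempfile

# Host side: create a directory with one file in it.
root = tempfile.mkdtemp()
with open(os.path.join(root, "hello.txt"), "w") as f:
    f.write("hi")

# The only grant handed to the "guest": a directory file descriptor,
# much like a WASI preopen (e.g. wasmtime's --dir flag).
dirfd = os.open(root, os.O_RDONLY)

# Guest side: paths are resolved relative to the preopened descriptor via
# openat(2) semantics; WASI gives the guest no ambient root to escape to.
fd = os.open("hello.txt", os.O_RDONLY, dir_fd=dirfd)
content = os.read(fd, 16).decode()
os.close(fd)
os.close(dirfd)
print(content)  # hi
```

This is roughly how WASI reconciles "requests needs I/O calls" with the sandbox: the host decides which directories and sockets exist at all.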


https://wasi.dev/ allegedly. Development seems slow though.


Wait until 2030.


wasm doesn't give you a vm, it gives you a process


To be more precise: if you wanted an isolated process, Docker already gives you that, along with the overhead of spinning up a new process for every request you want to handle.

What wasm people seem to be interested in is eliminating the process (and hence language runtime) spawning overhead and directly executing the specific function of your application.

"Write your serverless function in any wasm-compilable language and our multi-tenant runtime can host thousands of such functions and invoke them without much delay."

Of course, that only makes sense for programs written in a wasm-compilable language. When they do crazy things like running JS scripts on top of a QuickJS interpreter compiled to wasm, running on top of a wasm interpreter "for portability and safety", it gets hard to see the benefits.


The trend is to put more and more things into browsers and presumably WASM will have access to all that. But, yah, seems like a long way off.


By the same logic, why use containers? You can just ship VM images.

It's basically the same thing as using a language runtime vs. using a generic container for Lambda, except WASM would be lower level (you would need to ship the language runtime on top of WASM, but that's probably an order of magnitude less overhead than a container).

Containers feel like a clever hack to get existing apps packaged up and reduce the overhead of VMs, but I'd much rather deploy runtime/app code only.


> Containers feel like a clever hack to get existing apps packaged up and reduce the overhead of VMs

That's because that's exactly what they are, and that's why this claim is weird. Containers do not solve the same problem as WASM. No one is going to be taking Postgres and compiling it to WASM. They're not going to do that for redis or memcached or any of the other innumerable services that live happily inside of a container and need low-level access to a Unix system.

We may very well find that WASM replaces containers as the go-to for bundling a typical web app, especially one built with a compiled language rather than one that already has a runtime. But we will still use containers for the things that containers are good at.


I'd argue containers aren't that good at databases or caches (production ones anyway, fine for development) and containers are specifically targeted at app deployments.

Containers just add overhead for stuff like memcached and complicate database management.


That's a reasonable argument to make (although I think it depends on the context), but I see no reason to believe WASM would be any better. And the fact remains that a lot of people do use containers quite successfully for these things. I don't believe those same people would turn to WASM.


Someone already did: https://news.ycombinator.com/item?id=32498435

Overall the timeline doesn't seem implausible to me.


Okay, I'll amend my statement: no one should be using a WASM-powered Postgres as their production DB.

Right now compiling to WASM is sort of the equivalent of "but will it run Doom?" People are doing it for everything, and it's a fun experiment to stretch WASM. I could see it being useful in making some things available in the browser that otherwise couldn't be. But that doesn't mean it's going to replace containers in their rightful place. It'll be another option, and unless you're trying to solve one of the few problems that WASM was designed to solve (like running in the browser), it will be the wrong one.


> Okay, I'll amend my statement: no one should be using a WASM-powered Postgres as their production DB.

Realistically containers neither. DBs are infrastructure best left to PaaS or dedicated VMs you can provision/manage better.


Why won't they do these things? At the very least, you will only need to provide one binary, rather than one per architecture.


Because the burden of compiling a binary per architecture is microscopic when you're dealing with something like postgres or redis or memcached. Someone sets up a CI pipeline that compiles binaries for the supported platforms. The pipeline produces the required binaries with each release.

These projects care far more about performance than they do about ease of compilation or even of installation, because they're compiled once and run trillions of times in performance sensitive contexts. As I noted in reply to a sibling, I could see it being useful to run in a browser as a sort of learning playground, but that is not how people are going to deal with their production DBs—the bottleneck represented by WASI is not going to be worth it.


Really? VM images take longer to spin up, both at the infra level and the devex level


Just like a Docker image vs. a specialized language runtime?


Sure, but for many langs hermetic config is an issue; you want to be relatively sure that what your devs are building against is what gets deployed.


So you'll develop directly on the wasm runtime locally, which should be a platform abstraction?

The way I see it VM -> Containers -> WASM is a logical progression in the same direction from the perspective of deploying applications.


I mean, I don't think you get it. Wasm doesn't ship with a filesystem, POSIX support tools, etc. You only get your language runtime, so they're not comparable.

Bare metal, VM, and containers all "look the same", like from a UX perspective. Wasm does not.


That's solved by WASI, which is like the POSIX of WASM.

Nobody is arguing that WASM today will replace containers - 2030 is a long time away in tech.


WASI is like POSIX, but without the filesystem layout, ls, cat, etc.


WASI is not POSIX.

WASIX is POSIX :)

https://wasix.org


I thought WASI recently voted to add POSIX support officially.


> why use containers ? You can just ship VM images.

Lower overhead.

> clever hack to get existing apps packaged up and reduce the overhead of VMs

Yeah, exactly.


But my point is that WASM is the same thing in comparison to VM images: a lighter-weight environment for deploying applications.


For some context, this is from an “unpopular opinions” section of the podcast where they specifically ask for a controversial opinion. In the ensuing discussion, the prediction is further qualified:

> I’m not gonna say it’s gonna replace every use case; it’s clearly not. But for certain high-performance latency-sensitive use cases like trying to deliver feature flags globally to mobile apps, or web apps around the world (that is our use case)… it’s definitely very applicable to this problem.

I do think that if WASI gets sockets and threading soon as planned, it suddenly becomes compelling for a bunch of interesting workloads that containers are used for now.


> But for certain high-performance latency-sensitive use cases like trying to deliver feature flags globally to mobile apps, or web apps around the world (that is our use case)… it’s definitely very applicable to this problem.

I fail to see how it's applicable, unless your entire world is JavaScript and its performance.


wasm is not bound to javascript in any meaningful sense


wasm literally came out of asm.js and browsers. And its original goal was literally to make stuff in browsers more performant compared to JavaScript.

So when people say things like "high-performance latency-sensitive use cases are applicable to wasm", they clearly only understand performance and latency in comparison to JavaScript.


it started there but that's not like the only goal of the thing

and like there's plenty of people doing wasm on the server where performance is measured in microseconds


> it started there but that's not like the only goal of the thing

According to the website and the core spec it's still the main goal

> there's plenty of people doing wasm on the server where performance is measured in microseconds

I'm old enough to remember when node.js was also marketed as "web scale" and high performance.


This is highly unlikely in either of the futures I see for WebAssembly:

1 - The best timeline: WebAssembly keeps its strong capability-based security model, which requires explicit granting of resources to code inside the sandbox. This is great for security, but it means giving up full access to the file system and other I/O... which requires rewrites of things. This will, in turn, mean that many workloads won't shift to WebAssembly.

2 - The worst timeline: WebAssembly repeats the mistakes of the JVM and eventually succumbs to the demands to allow full access to all the resources of the host system, defeating capabilities and making it a new JVM. Then it's not any safer than a VM, so why would anyone use it? So, most workloads don't shift to WebAssembly.

[Edit/Append] WebAssembly uses a capabilities model of security. You have to explicitly give it access to resources from code outside of the sandbox, which are then passed (like file handles) to the code running inside the sandbox. This completely and securely prevents the code inside the sandbox from causing any undesired side effects in your system.

Think of it as handing someone $5 cash to make a purchase. You can't lose more than $5, no matter how confused or wrong things go after that point, you've limited the side effects to that $5.

Or think of it as plugging in a device to an outlet with a 15 Amp circuit breaker. No matter what you do, you won't draw 100 amps through the outlet, nor take down the grid (like the power in the old TV show Green Acres).

This is different than "capabilities" on a smartphone, which are "allow location access" or things like that, global, non-granular access to big chunks of your system. There is no "allow access to your file system?" flag in WebAssembly. In the best timeline, there never will be.
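A minimal sketch of that capability style in plain Python (an analogy, not a real sandbox): the guest receives exactly the handle it is handed, and has no ambient authority to reach anything else:

```python
import io

def guest(log_handle):
    # The guest can write only to the handle it was handed; nothing in
    # scope lets it open arbitrary files or sockets on the host.
    log_handle.write("guest wrote this\n")

# Host side: the one capability we choose to grant is a single writable stream.
handle = io.StringIO()
guest(handle)
print(handle.getvalue(), end="")  # guest wrote this
```

Like the $5-cash example: whatever goes wrong inside `guest`, the blast radius is bounded by the one handle the host passed in.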


> WebAssembly keeps its strong capabilities based security model, which requires explicit granting of resources to code inside the sandbox. This is great for security, but it requires giving up full access to the file system, and other I/O

Can you unpack that a bit for those of us who don't know this area well? Can't "access to filesystem" be one of the capabilities that can be granted or not granted, thus giving no decrease in security because anyone who wishes to host webassembly without granting it is able to do so?


There are some quite different systems-programming topics getting conflated here. This assumption that WASM is a competitor to Docker doesn't seem very sensible, and it's hard to understand where it comes from. Will Python or Postgres suddenly stop having versioning and installation issues if they run on a WASM interpreter? No. The issues are at the ecosystem level; they're not related to how code is translated for the host CPU.

> You can optimize the size of your containers to be pretty small, like tens of megabytes… But WebAssembly is, at its core, designed to be more portable than that.

> You're talking about tens of kilobytes, instead of tens of megabytes. And the boot-up times can be measured in microseconds, instead of milliseconds, or tens of milliseconds, or even seconds (!) for containers.

This paragraph is emblematic: it's mixing up code size, startup time, portability, containers and virtual machines of two different kinds without really linking them together or defining requirements.

The thing is, WASM isn't a new idea. The industry has a lot of experience with VMs that JIT compile portable bytecode. Startup time of any WASM program is going to be dominated by the fact that you're in the interpreter the whole time. As programs get larger they do things like load config files, connect to databases, initialize in-memory caches, possibly reflect themselves because that's often convenient, and so on. Because it's all init code none of this is hot, so it gets interpreted and that's slow. WASM offers nothing in the way of a standard library that could be accelerated, so languages have to bring their own which makes things worse. And the WASM environment is extremely minimalist, so you need to ship a lot of code with your app to make things comfortable especially as most servers aren't written in C++ or Rust and aren't going to be any time soon because that's not their sweet spot.


WASM isn't a new idea. I will say, the one thing that WASM has "right" over, say, the JVM, Roslyn, or other portable runtimes is the fact that you can share WASM modules across language boundaries.

I can't drop a Rust module into my Go code; at minimum, I'd need to dip into some kind of FFI.

With WASM, you can just import the WASM module into your language, as long as it supports WASM, regardless of the original source. It's a portable format and runtime. That is the key difference from VMs of the past, to me at least.

It also makes sharing code written in, say, C or C++ easier and safer.

I agree that WASM replacing Docker is something I can't "see" right now, because it's just very different, but I do think the highlights above are pretty great.

EDIT: Actually, I think I understand the point of WASM potentially replacing Docker: if WASM runtimes support the underlying operating systems sufficiently (like virtualization has to), then you could just compile what would be a container into a WASM module. I.e., I could compile Postgres, Redis, etc. to WASM (or, more importantly, the projects could) and simply use their WASM binaries to support my application. That's what I think they're going for here. It's hard to "see" it because there are so many unknowns (WASI doesn't yet have all the features needed for this to happen, for instance).


> With WASM, you can just import the WASM module for usage into your language, as long as it supports WASM, regardless of original source. Its a portable format and runtime. That is the key difference than VMs of the past to me at least.

Welcome to 2001, .NET CLR.

Welcome to 1988, IBM AS/400 and TIMI.

Welcome to 1980, Amsterdam Compiler Kit.

Welcome to ....

The thing WASM has going for it is marketing, and young generations completely clueless about the history of bytecode environments since the Burroughs Large Systems were introduced in 1961.


Which I acknowledged clearly: I know there are other approaches, and I even said the idea of WASM isn't new.

Its key feature is less friction. WASM is well specified relative to other options, and doesn't rely on one language to be feature compliant, or have any one language in mind for its targeting.

CLR is naturally really tied to C# / F#. The JVM is a bit better in this regard (Kotlin, Java, Clojure, Scala, and I think Julia all leverage this) because it has a better specification for doing this.

I think WASM excels because its specification is much easier to digest and implement. Writing WASM modules by hand isn't too terribly bad either (the text format is even a Lisp-style S-expression!).

WASM also has good marketing and a good place to test it. Every evergreen browser right now can run WASM code. That makes it infinitely easier to test WASM implementations. As WASI becomes more mature, it will also make it easier to develop interop as well.

I think centrally its just the right combination of good specification, ease of implementation relative to the others, and good marketing.


Julia isn't a JVM language. It's on LLVM.


my apologies! I'd edit it if HN would let me. I was mistaken


well, that, and performance

and mindshare


Which wasm runtime beats the CLR?

Which wasm runtime has more deployments in production than the CLR since 2001?


nobody is deploying clr applications anywhere where wasm applies


The browser? Sure.


can you point me to any meaningful use of the clr in the browser? i'm not aware of any


WASM and FFIs are orthogonal, right? WASM doesn't specify how to map a Rust Option<String> to a Go string, or a C++ std::string to a Java String object. This is what I mean about things getting conflated.

Note that both the JVM and CLR do try to solve this problem, hence the name Common Language Runtime. WASM actually does far less in this area.

Still, times change. If you want language interop, then these days Truffle/Graal is the state of the art to beat. It not only defines a generic inter-language FFI, but also has a compiler that understands how to compile and inline across that FFI. So in Truffle-world, calling a JS function from inside a hot Rust loop isn't actually an insane or stupid thing to do, because the whole thing will be compiled and optimized as a single unit, and things like maps/arrays/etc. will be properly converted between languages without any data copying.


My experience with WASM so far is that I can import WASM modules without caring at all about FFI. They're produced, and I can use them as is, whatever their exposed interface is.

I'm not even sure Truffle/Graal is doing that across language boundaries (yet).

Of course, in practice, this is basically C, C++, and Rust being used in the web browser or with V8/Deno, because that's the biggest WASM runtime there is right now, but that's changing too, as language communities are starting to seriously consider first-party support for WASM.

To address the point of mapping types/data from one language to another: that's the whole point of WASM, that those languages only need to bind their output to the WASM spec. That's all there is to it. Valid WASM will run in any valid WASM runtime.


Truffle is definitely doing that across language boundaries, and for far more languages. For example, using Ruby from JavaScript:

    var array = Polyglot.eval("ruby", "[1,2,42,4]")
    console.log(array[2]);
Invoking a C/C++/Rust main method from Python:

    import polyglot
    cpart = polyglot.eval(language="llvm", path="polyglot")
    cpart.main()
Reading a JS array from C:

    #include <stdio.h>
    #include <graalvm/llvm/polyglot.h>

    int main() {
        void *array = polyglot_eval("js", "[1,2,42,4]");
        int element = polyglot_as_i32(polyglot_get_array_element(array, 2));
        printf("%d\n", element);
        return element;
    }
More examples here: https://www.graalvm.org/latest/reference-manual/polyglot-pro...

W.R.T. the "WASM guidelines", can you point us to these guidelines and where they define how to map a Python dict to a Java HashMap, for example? The Truffle Polyglot messaging spec does define how this is done in a way that avoids performance problems because the FFI is expressed in terms of compiler IR. WASM can't even run either of these languages natively so it can't define how they interop with each other.


This is actually a pretty good step in the right direction.

Big question I have: can I just import (like I'd import anything else in a code base) the dependency and it'll "see" the exposed API surface of the module?

This looks like it does some eval of the underlying code depending on what it is, but it must be explicitly mapped by the consumer (declaring the language and the code), which I respect wholly, but it's a little different from the "ease of use" perspective.

That said, its another promising approach, always interested to see how this matures.

To answer the second half of this: it's not about defining how a Python dict should map to a Java HashMap. It's about how Java or Python or whatever language maps to WASM. As long as the produced output is valid WASM, it will work across any boundary that supports WASM. Inherently, it avoids having to think about any specific language this way. Right now this also produces limitations (because features have to be specified uniformly, so the target of compilation is the same for everyone).

The inherent flaw of most multi-language runtime systems, in my opinion, is making the consumer think about the underlying language. Rather, if you use Go to produce a WASM module, it can be used from Rust, or JavaScript, or C/C++, etc. without considering how it was written.


Yes, depending on what you mean by "see". For a statically typed language using imports from a dynamically typed language, you'll need some interfaces and stuff to cast values to. But that's inherent. The C compiler doesn't have any concept of a dynamic type beyond void* so you'll always need to cast it to something more specific before using it. For the other way around that doesn't apply and you just use whatever you import.

You don't technically have to specify the language. The file extension is usually enough to select the right language. I'm not sure how it gets easier, unless you are asking about a sort of polyglot package manager or build system. Indeed there isn't one of those, because they're usually language specific. It would be a good improvement to have though. Right now you'd need to use npm to get JS code, pip to get Python code etc.

With respect to language mappings, I don't think WASM solves this. There is no mapping to WASM of a hashmap because the WASM type system doesn't have any such concept. Could you elaborate on what you're referring to here? As the name implies, it's a kind of assembly language, so it offers no more assistance in this task than ARM or x86 does. Moreover the approach of defining a universal "language" and mapping everything to it was tried and didn't work so great, that's sort of the classical JVM or CLR approach. Languages disagree on how core concepts work. For example a JS array is a very different thing to an array in C, a Java HashMap has a different API and behavior to a Python dict which in turn is different to the same concept in Scheme. You can make some progress by trying to compile everything through to the One True HashMap, but performance and edge cases at the boundaries become problems.
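A tiny Python illustration of why a One True HashMap is hard: languages disagree even on what a missing key means (JS-style access, where a missing key silently yields undefined, is emulated here with `.get`):

```python
d = {"a": 1}

# Python semantics: a missing key is an error.
try:
    _ = d["b"]
    python_result = "value"
except KeyError:
    python_result = "KeyError"

# JS-style semantics: a missing key silently yields undefined (None here).
js_result = d.get("b")

print(python_result, js_result)  # KeyError None
```

Any unified map type has to pick one of these behaviors (and one answer for ordering, hashing of mutable keys, and so on), which is exactly where the edge cases at language boundaries come from.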

The Truffle insight is to let each language be its own thing. Instead of having One True HashMap (or whatever), you let each language work with its own concepts and memory layouts then define a standard way to export basic operations like "read member", "invoke function", "access key" in the form of compiler IR snippets. So when you invoke polyglot_get_array_element on a Ruby array it's not really doing a function call, and there is no shared array type that every language has to compile to. Instead the function call is replaced with the actual machine code that the Ruby engine would use to look up an array element, and the result is stored in a C void* but it's not a pointer to some unified object concept or even a real pointer at all. That's why you can do something like casting it to a struct of function pointers, and get something that will then directly invoke Ruby methods. The cast operation is also being virtualized. This approach sidesteps the question of how to share stuff between languages without trying to compile it all to a generic meta-language.

The operations are called the "interop protocol", which is a bit confusing because it's not actually a network protocol of any kind. It defines lots of high-level mapping operations so you can do things like work with a time/date from a guest language using your host language's time/date types.

https://www.graalvm.org/truffle/javadoc/com/oracle/truffle/a...

It's important to note here that the exported operations aren't target language specific. Once you implement the interop protocol for a Truffle language it works for every other language, there's no N:N binding work required. So it's a scalable approach.


WASM is a specification. So when you compile to it, you compile to the specification. Any language runtime (be it JavaScript via V8, Go, or whatever) or language interop via compilation (Rust, C, C++) can interact with a WASM binary if its toolchain supports understanding it. That's the key here. I can take my JS, compile it to WASM, and Rust can use it as a module because its compiler understands WASM (in simple terms).

This all requires a WASM runtime, of course, no doubting that. However, the mere fact that I can take two languages, compile one to WASM, and use that as a module in the other language is what I'm talking about.

And WASM compat is taking off pretty well so far. I admire Truffle's approach; it might prove better at solving this problem long term, because languages can move at the speed they want to move, and the interop story doesn't really change. Whereas with WASM, you have to keep up with the WASM specification and runtimes, which could be a headache if not done in a community-oriented, deliberate way.

All that is to say, it's up to the language toolchain to deal with compiling that language to WASM, but once it's WASM, it's portable.


I'd be curious to see a demonstration codebase where JS or other dynamic languages are compiled to WASM and then loaded into something else with automatic FFI at the module level. Because I think you'd still need to work with v8::Value objects or an equivalent to use JS from C++, WASM or not. WASM isn't meant to be a compilation target for languages like JS, what dynamic languages need from a compiler is very different to what Rust needs.


To be clear, I'm talking about the following:

I can write some code or use modules written in JavaScript that compile to WASM. I can use that compiled output with any runtime that can import WASM and expose its interface natively. To that end, I could use it in Rust, no problem.

The same is true in reverse. I can write a Rust program, compile it to WASM, and use it in a JS runtime (lots of projects do this already, actually; esbuild does this with Go as well).

Right now there are of course limitations; however, if the current trajectory holds, you'll be able to just import WASM directly into a compatible runtime. So I can take the JS-to-WASM output and share it with any language that can understand it.

This is, as I understand it, the advantage of WASM in the future tense (and if you use JavaScript, it's kind of present tense, especially with import assertions).


I believe that's a bit strained as well. I'm pretty sure that if you want to use a Rust module in Go, you also need to follow a standard calling convention analogous to cdecl in C; you'll have a hard time supporting any higher-level feature across languages.

The incompatibility of higher-level constructs seems like much more of a bugbear to resolve. For example, almost all popular high-level languages have some idea of garbage collection/reference counting. There's a WASM GC proposal, but I'm not sure how they can make it play nice with every language's idea/semantics of garbage collection, since many languages place their own particular set of restrictions on GC, like interior pointers/pinning/copying etc., some of which might be mutually exclusive.

I could go on, but my point is that I believe there always needs to be an FFI between languages, that is usually barely more expressive than what a protobuf contract/C API would be (and we already have those)


The JVM isn't bound to one language either -- it has been able to run bytecode generated by other languages for decades. But I'm afraid that capability has never received mainstream attention.


I mostly agree with you, but I think I can shed some light here:

> As programs get larger they do things like load config files, connect to databases, initialize in-memory caches, possibly reflect themselves because that's often convenient, and so on. Because it's all init code none of this is hot, so it gets interpreted and that's slow.

One of the most interesting aspects of Wasm is that because it is built with isolation in mind, it's really easy to get a snapshot and run it later. So instead of doing all the cache initialization etc. at init time, you can initialize the module at build time and then take a snapshot of it for a very quick cold start next time.

When running in the browser, there is still an initialization step to process the wasm bytecode, but in the server-side world this can also be done in advance since you can trust the compiler and know in advance which processor architecture you will want to run it on.
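A rough analogue of that build-time snapshot idea, sketched in Python (pickle standing in for the memory snapshot; in the Wasm world, tools like Wizer do this pre-initialization at the module level). All names here are illustrative:

```python
import pickle
import time

# Toy analogue of snapshotting: pay the initialization cost once at
# "build time", serialize the resulting state, and at "run time" just
# restore it instead of recomputing caches/config from scratch.
def expensive_init():
    # Stand-in for loading config, warming caches, connecting pools, etc.
    return {i: i * i for i in range(100_000)}

snapshot = pickle.dumps(expensive_init())   # done once, at build time

t0 = time.perf_counter()
state = pickle.loads(snapshot)              # cold start: restore only
restore_time = time.perf_counter() - t0

print(len(state), f"restored in {restore_time:.4f}s")
```

The real thing is stronger than this sketch: because a Wasm instance's linear memory is fully isolated and explicit, the runtime can snapshot it byte-for-byte without any cooperation from the guest program.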


Right, but that trick also isn't new. emacs was doing it decades ago and there are JVMs that also do it. That's how GraalVM native images can start and run faster than C programs do: the app is initialized at build time and then the state is just mmapped into a pre-compiled native process.

HotSpot has a less aggressive version of this where you can pre-initialize a lot of VM state and dump it to a file for faster startup time. It's about a 30% win and that's going up over time, but it doesn't AOT compile everything.

Of course, if you're serious about instant startup from zero then you end up going the native-image route, but then what value is WASM adding? What's being shipped to the edge in that case is a compiled binary anyway.

There is some scope here for threading the needle - for example, an edge worker implementation could be given compiled bytecode of some kind and then do the compilation and dumping process server side. This is a bit like how Apple compiled LLVM bitcode itself for delivery. However then the build process needs to be sandboxed and at that point why not go full FaaS and let users upload whatever binaries they like? AWS is doing it successfully.


> This assumption that WASM is a competitor to Docker doesn't seem very sensible and it's hard to understand where it comes from.

Exactly. WebAssembly is complementary. Docker has Wasm beta support [0]. See: Announcing Docker+Wasm Technical Preview 2 [1] (Mar 22, 2023)

[0] https://docs.docker.com/desktop/wasm/

[1] https://www.docker.com/blog/announcing-dockerwasm-technical-...


> Startup time of any WASM program is going to be dominated by the fact that you're in the interpreter the whole time.

The rest of your comment I agree with, but this isn't really true. One of the advantages of engines/platforms like wasmtime is that they do indeed do AOT compilation and transparently cache JITed modules. Few Wasm runtimes have interpreters. So the fast startup that you get on the Fastly platform is primarily because Wasm modules are precompiled and instantiation has been heavily optimized.

I think overall that Wasm and containers solve mostly orthogonal problems, but there is some overlap in the serverless space. If programs are small and have few dependencies that can be inlined into a Wasm module, Wasm will do well against containers. If you have a very large software stack with a lot of stuff in the container or base image, Wasm won't be as competitive, at least not yet.


I suppose the question is why do the compilation server side? With a Linux sandbox policy as restrictive as WASI is, like by blocking all but a handful of syscalls, is there much difference between native machine code and WASM from the cloud operator perspective? Is maybe the issue here that low level Linux sandboxing with bpf and stuff isn't well documented, or maybe not trusted?


For one thing, Wasm gives CFI out of the box, so binaries you run are not vulnerable to RCE. Also, it's portable, so you can run on, e.g. arm servers.


WASM can be pre-compiled I assume.

A basic use case for moving from containers to WASM is to provide the option for custom backend code for users in a more lightweight way. For example, I have a site that allows procedural image generation via an API and I originally built it using containers. It could be much less resource-intensive and faster startup if I could run that code in web assembly.

Or, I was building a generative AI site that creates custom web applications. In order to allow backend code, I set it up so everyone gets their own fly.io VM/container(s). This is very flexible but not very fast startup time and the smallest option is like 128MB or 256MB RAM which needs to stay loaded otherwise they will have to wait for another restart.

I never really needed to provide arbitrary Linux capabilities, just sandboxed backend code. So if I do something like that again I will try to use web assembly instead. Probably with lightweight TinyGo or Rust-based request handlers or something like that.


Owner of the purposefully controversial opinion here :) Looks like it's proven to be very controversial, which was the stated goal of the podcast segment :)

For context, I had just returned from KubeCon + CloudNativeCon EU in Amsterdam. I had a chance to talk to many of the great folks pushing the WASM ecosystem forward, and there certainly seemed to be a large amount of momentum behind using WASM in the server space and at the edge. I'd had a great time learning from the folks at various open-source projects / companies pushing WASM forward:

- https://bytecodealliance.org/ (shout out to Bailey)

- https://cosmonic.com/

- https://www.fermyon.com/

- https://wasmcloud.com/

At DevCycle we are using WASM to help share core Feature Flag decisioning code across our Cloudflare Worker APIs and our server SDKs for local decisioning. We've learned a bunch about compatibility / optimizing performance / limitations that I'm happy to chat about in more detail. But we are generally excited for the future of WASM, especially for the new features coming: Component Model / Threading / Garbage Collection changes / Networking.

If you are looking for how to use WASM Components to replace container-based runtimes, look at the companies linked above.


If you were around ~25 years ago, you likely remember the idea that the JVM would run everywhere and replace everything. (And some XML on top.)

Indeed, the JVM can run everywhere, and has reached far and wide, from SIM cards to huge servers. But it has not replaced all other runtimes, even when it achieved really good GC and near-native speed thanks to JIT compilation.

WASM is very similar to JVM in many regards. It will reach far and wide, it will supplant a few things, but I bet it's not going to replace too many things, let alone native code in containers.


Security and universality are the biggest differences here.

WASM was designed from the ground up with security in mind (learning from earlier projects like PNaCl). Likewise, WASM was designed with a lot of language targets in mind rather than just Java.


Sure, if we ignore internal memory corruption inside linear memory.

JVM might not be the ideal example of polyglot runtime, yet there are several others of the same vintage, designed with multiple languages in mind, including C and C++, e.g. CLR, TIMI,...


WASM is a compiler target for countless languages, unlike JVM which is tied to Java. That’s what makes it incomparable to JVM… It’s a global runtime-on-demand for whatever you throw at it.


In relation to the JVM, that's only true if we ignore the evolution of its ecosystem.

Then there are the CLR, IBM TIMI and plenty of other ones all the way back to 1961.


JVM is not tied to Java. Get basic facts right.


My prediction is that by 2030, a new generation of developers will replace WebAssembly with a brand-new shiny thing that will be clearly superior to what the then-senile and geriatric developers of the clearly-flawed WASM implementation have created, as it will be based on fresh new ideas (tried first in the 1980s).

This new thing will address all the shortcomings of WebAssembly, and will be projected to become the standard portable way of running applications by 2040.


While I've only briefly examined the WebAssembly (WASM) security model, it seems like a recurrence of vx32 and NaCL. One inherent issue is the "glue" code enabling the WASM runtime to invoke necessary operating system (OS) primitives, such as memory allocation, file system and network access.

The appeal of vx32/NaCL/WASM lies in its bytecode (or verified x86 asm stream), which cannot operate independently due to its lack of access to syscalls or other inter-process communication (IPC). However, a program without syscall access is essentially useless, limited to using the CPU's arithmetic logic unit, reading/writing memory, and potentially handling input/output buffers.

Soon enough, developers will require access to modern amenities offered by OSes such as memory mapping, file descriptors, and async IO. Consequently, OS bridge creators will supply proxies to the kernel, some of which will be improperly implemented. Given their complexity and size, these proxies may essentially become exploitable mini OSes.

There's no perfect sandboxing; some forms appear more trustworthy (or more thoroughly scrutinized by researchers) than others. WASM may seem safer than vx32/NaCL because it lacks bytecode verification issues; the bytecode is fully understood, unlike verifying x86 instructions (which are prone to errata). However, the bridge/glue/proxy/broker problem remains unavoidable.


What you are describing is basically WASI [1]. Yes there are exploit concerns in the bridge layer, but it is far easier to sandbox these capabilities-based modules, than native programs. For example, many WASM runtimes are written in Rust, which greatly decreases the risk of exploits in general.

[1] https://wasi.dev/


“So you basically have the full SpiderMonkey runtime running within WebAssembly, running your JavaScript or compiled TypeScript code…”

From my old cynical curmudgeon viewpoint, this may be a valid approach, but you need to find a way of saying it that doesn’t sound so ridiculous. A wasm SpiderMonkey runtime running inside a native SpiderMonkey runtime? And presumably the inner runtime supports wasm?


I still don't understand the point of WASM-only runtimes when V8 can run WASM and JS.


A WASM runtime can be dramatically smaller, which may be important in some scenarios.

WASM runtimes should grow to a maturity level comparable to V8, the JVM, and the .NET runtimes before they start competing with them. Currently WASM accompanies other software, and I find that pretty appropriate.



> 1 tight security model

> 2 very fast boot-up time

> 3 scalability at the edge

> 4 much smaller footprints

> 5 portability across environments

Aren't #2-4 just wasm rediscovering static linking?

95% of the advantage that docker has over a tarball is due to gcc's (and glibc's) strong bias towards dynamic linking and how hard it is to portably package a python application's dependencies.


I think of the pro-WASM argument as trying to tightly define a stricter runtime capabilities model plus static linking, bare-container style.

N wasm operations to guard rather than N syscalls.

The Cloudflare Worker infrastructure is very cost effective. The thing I wonder about with portability is who takes responsibility for side-channel defense mechanisms.
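The "N wasm operations to guard rather than N syscalls" idea can be sketched as a toy capability allowlist (Python, all names invented; real Wasm/WASI hostcalls are obviously richer than this, but the shape is the same: the guest can only reach host functions it was explicitly granted):

```python
# Toy capability model: the host exposes only an explicit allowlist of
# operations to guest code, rather than the whole syscall surface.
class Sandbox:
    def __init__(self, capabilities):
        self._caps = dict(capabilities)  # operation name -> host function

    def call(self, name, *args):
        # Deny by default: anything not granted simply doesn't exist
        # from the guest's point of view.
        if name not in self._caps:
            raise PermissionError(f"capability {name!r} not granted")
        return self._caps[name](*args)

log = []
guest = Sandbox({"write": lambda msg: log.append(msg)})

guest.call("write", "hello")           # granted, succeeds
try:
    guest.call("open", "/etc/passwd")  # never granted, denied
except PermissionError as e:
    print("blocked:", e)
print(log)
```

The auditing argument is that the trusted surface is the handful of functions in the allowlist, not the hundreds of syscalls a native process could attempt.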


On Linux you can block off syscalls by number these days. If you only want to expose read and write on some pre-prepared file descriptors you can do that. This leads to the question of what security is being added by the bytecode layer. You could argue that you're insulated from CPU errata, but cloud and hosting vendors have been selling raw CPU access for decades without real issue beyond side channel attacks (patched, also affects you even with WASM).

WASM came out of the web and is needed/wanted by the web folks because they can't rely entirely on operating system kernel sandboxing (Windows is dubious, macOS is powerful but private/undocumented API), and because they don't like the idea of implicitly incorporating every possible device's CPU ISA into the HTML5 spec. Neither issue applies on Linux servers where nothing is a formal spec anyway and the Linux sandboxing facilities are powerful and open.

JIT/interpretation can only slow things down over using native code, so "fast startup" is probably being confounded by the fact that WASM programs tend to be written in C++ or Rust.


Another often-overlooked advantage of Docker is the fact that its checksummed tarball of your base container image can be shared with other container images. This can cut down on data transfer and data size if you use the same base image for everything because it's only transferring delta images.
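A toy model of that content-addressed sharing (Python, made-up blob contents): two images referencing the same base layer store and transfer it only once, because layers are deduplicated by digest, which is roughly how registries and the local layer store behave.

```python
import hashlib

# Toy content-addressed layer store: layers are identified by the
# SHA-256 of their contents, so identical layers collapse to one blob.
def digest(blob: bytes) -> str:
    return hashlib.sha256(blob).hexdigest()

base  = b"debian rootfs + runtime deps"   # shared base image layer
app_a = b"service A code"
app_b = b"service B code"

# Each image is just an ordered list of layer digests.
image_a = [digest(base), digest(app_a)]
image_b = [digest(base), digest(app_b)]

# The store keys blobs by digest: 4 layer references, only 3 blobs kept.
store = {digest(b): b for b in (base, app_a, app_b)}
print(len(store), image_a[0] == image_b[0])
```

Pulling image_b after image_a only transfers the `app_b` delta, since the base digest is already present locally.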


Most of our services use the same base image and some of them run in the same host.

The main thing to watch out for is that the steady state disk usage is different from the worst case. During upgrades the amount of disk used can balloon up for a little while, and these sorts of transient costs tend to mess with people’s ability to do capacity planning properly.

So there’s a law of diminishing returns for pushing things out of your images into a common base. The more you do the larger the over/under gets. Spend a good bit of energy in shrinking the base image, not just growing it, and you’ll be okay.


This doesn't even really apply to wasm, though. That is the big difference. You would have a runtime that can execute wasm, and it's all the same. Any browser can run wasm, provided you don't try unsupported syscalls.


The point I'm trying to make is that what is being described as a "revolutionary" approach -- that of putting all of your application code into a single binary -- is just static linking but using wasm plus a small JavaScript engine. It's revolutionary if your whole experience is with Node.JS inside of Docker, but from my perspective it's the rediscovery of a 50 year old technology. Just with TypeScript/JavaScript in a browser engine.

To be clear this is a great idea and will solve a lot of problems, but it's not like people haven't thought of this before today.


I think 2 is more about how a single wasm interpreter can host multiple "serverless applications" and execute them without the process (/language runtime) spawning overhead.

> 95% of the advantage that docker has over a tarball is due to gcc's (and glibc's) strong bias towards dynamic linking and how hard it is to portably package a python application's dependencies.

Mhm. docker, flatpak, snap, appimages .. so much complexity is basically us dealing with linux dll hell imo.


Perhaps not coincidentally, #1 and #5 seem like the most significant features from this list.


and #5 is just rediscovering the JVM



Partly, but the capabilities exposed by WASM are different. JVM is fundamentally classic-OOP. And you might have reason to trust WASM bytecode as a standard more than JVM bytecode.


Probably not. We have virtual machine runtimes which are more secure and more generic than containers, but we still use containers because they are much more efficient. WebAssembly provides only some of the goodies of VMs while taking away most of the efficiency of containers... it has its uses, but I don't see why we should expect this runtime to take over the others.


Is it correct that WASM probably won't affect most CRUD apps, since most don't need significant client-side compute or graphics resources?

But that WASM might enable intensive applications (e.g. cutting edge games, or 3D rendering software like an architect might use) to run in the browser, rather than on the OS, thus enabling developers of that software to make much more frequent updates (just like any web application)?

What would this mean for page load times (e.g. surely not a 700MB program downloading when the user visits a URL..?.. or.. can it?)

(apologies for the newbie questions)


WASM probably won't affect most websites today, correct.

WASM probably won't enable cutting edge games. It has been around for years and there's no interest from the gaming community there, the newest additions to WASM like GC won't affect games much. There are other issues with the web platform that prevent this from working, e.g. the implicitly managed disk cache, the relatively old graphics APIs exposed into the sandbox, the performance impact of sandboxing, and a few other things. The games world has Steam and consoles, that seems to be good enough for them.

There might be some attempts to port desktop productivity software to the web using WASM, probably in partnership with Google, like what an architect might use. But probably not. Installing software isn't hard and pro grade software isn't an impulse buy. How much incremental revenue would a web port bring relative to the cost? Probably not much.

700mb programs downloading when a user visits a URL isn't necessarily a problem if that download actually sticks, but see caching issues above. After all, people routinely download programs that large. Chrome itself is ~500mb on disk!


WASM is not just a browser technology, it can be (and is heavily) used for edge and servers.

I can write code once and run on many runtimes across many different programming languages.

I think the server use-cases are actually more compelling than the browser use-cases.

For example, Fastly's edge compute supports running WASM code at the edge. Similar to lambda, but in the language of your choice and with great sandboxing qualities baked in.


WASM had some neat/gross implications for blackbox CRUD apps without having to show source. Some people might even start embedding secrets in web source code again!


> I’m not gonna say it’s gonna replace every use case; it’s clearly not. But for certain high-performance latency-sensitive use cases like trying to deliver feature flags globally to mobile apps, or web apps around the world (that is our use case)… it’s definitely very applicable to this problem

Behind the clickbait headline, a much narrower and reasonable claim.


This is a phenomenon we see over and over with our "Unpopular Opinions" segment. The summary statement is often unpopular, but once you hear out the person's reasoning (unless it's about food or something purely subjective) most people end up agreeing with them.

It's been tough to get _truly_ unpopular opinions (as voted on by Twitter/Mastodon users) represented on the pods.


What are some truly unpopular opinions?


How big are app downloads going to get!?


This article discusses replacing your container/VM backend with a tiny wasm bundle probably run at the edge. The app sees no changes to the API requests it makes.


“wasm boots fast, containers boot slow”: as someone who uses neither (unless you count cgroups), i assume there’s a reason that ”classic” containerization is slow? like, your container today provides a complete linux userspace with a filesystem, scheduler, user/permissions enforcement, elf loader/dynamic linker, service manager (systemd/openRC/whatever), maybe a DNS stub resolver, etc — and maybe you don’t actually need half of these? what does using a new bytecode/architecture have to do with optimizing any of this? is it just a sneaky way of swapping out libc and the abstractions beneath it (and if so, why not do just that on “plain” x86_64)?


Hm. Controversial opinion is a good way to describe it.

The thing is: Docker or Podman is just the runtime coordinated and controlled by Kubernetes or Nomad in our case. Nomad practically already supports a hypothetical WASM runtime or offers simple hooks into this: You have e.g. a java driver in Nomad, which downloads a fat jar from an artifact store, cgroups it as far as I know and launches it with one of the available JVMs. There's a bunch of these drivers, even down to IIS pools.

Some generic WASM runtime would slot right in there.

However, we don't use them. Once you use them, you're again coupling the application runtime to OS updates and management. Staying with the Java example, currently teams are entirely free to use whatever Java version they want. If they want to update multiple times per year to the latest versions - entirely feasible for tight services with low technical debt and good tests - they can do so. As soon as the JVM becomes part of the OS configuration again through the Java driver, we in infra-ops immediately become the blocker for all Java updates across the entire infrastructure. And on top of that, we have a team providing base Java images with a JVM baked in, so most Nomad workers pull the JVM layer once after it's released, and the layer is then cached locally until no application needs it anymore. So it's not much more storage-expensive than putting the JVM on the disk directly.

But that's the smaller part of the problem. Nomad isn't just the driver runtime. I can tell Nomad to run 5 instances of an application and to spread them across AWS regions, or VSphere HVs / Racks as well as possible. And Nomad will implement and maintain this, almost regardless of what the infrastructure does. I can drain workers and shift applications around without really knowing them, since this management is configured in the jobs.

Again - this decoupling of OS/infrastructure management from application management is what makes container orchestration valuable at larger scale. The runtime below it is an important detail, but it's a detail for average REST service deployments. A WASM runtime may slot in as faster and more cost efficient, or it may not. Who knows.


If WASM can overcome its current extremely sluggish development speed, I can definitely see some applications of containerization move to WASM, but I really think containers and WASM are two very different solutions.


Bold to make a prediction like that. But I think perhaps the author doesn't understand Kubernetes that well -- you can run WebAssembly as the runtime for Kubernetes and still get the other things Kubernetes provides, such as self-healing. That's because Kubernetes is not built monolithically; things like the runtime or kubelet can be replaced with something else.

https://developer.okta.com/blog/2022/01/28/webassembly-on-ku...


Reasons why WASM will not replace containers:

- System calls call into the Linux kernel for things like disk and network access. These are highly tested and reliable.

To keep WASM's isolation and portability, it would need to replace these APIs with new message-passing ones, which would kill performance.

- C FFI

- Dynamic libs

Being able to integrate with existing C libraries in the way they are intended to be used.

- Better VMs/OS level sandboxing

MicroVMs like Firecracker that let you run your code unmodified as if it is running directly on the OS.

I like WASM, but feel it is over hyped and the alternatives are “good enough” and already widely used.

In the browser it is great though.


If that was true we would have already all switched to Java


My understanding is that the goal (if not there already) is that you can use wasm tech just like Docker etc. The value prop is that wasm would be lighter and have other built-in benefits with the adoption of newer browser (security) features like sandboxed file system integration. This would be made available without the overhead of Chrome per se. Forgive me if I overstate something, but this is the gist.


I can see that happening in the future, in a few decades, tiny modular runtimes with low overhead that start in a few milliseconds, probably an entirely new paradigm of backend development to go along with it. Hard to imagine that now of course so everyone will just laugh, just like everyone laughed at docker.


Mmmm don't think so, no. They solve entirely different problems. I also don't think containers will be around in 2030, there will be more fleshed out OS-supplied facilities for general purpose sandboxing I'd think.

Title is clickbait anyway, and such blatant and shameless clickbait at that.


> To get the full experience you should listen while you read.

> Click here to listen along while you read. It's better!

How could it possibly be better to listen to some people talking thus distracting you while you're reading?


The whole thing feels like a hallucination from someone who didn't spend enough time with either WebAssembly or containers and simply didn't understand how these things work.


What is the overhead for WASM like, vs bare metal? I know containers generally run as fast as regular processes, but I imagine there is some interpretation overhead for WASM.


containers are FAR slower than regular processes, by almost all performance metrics

wasm is orders of magnitude faster -- actually comparable to native execution, at least on the server


No. Only NAT is slower to any real degree.

https://stackoverflow.com/a/26149994


your link describes a very narrowly scoped benchmark, not something that is generally applicable


I don't see this catching on when an effectively similar thing (Java app servers) failed to catch on. Containers are just easier.


When can I start shoveling my python/pytorch/CUDA spaghetti into WASM?

I would like serverless or at least rapid booting GPU functions...


It makes sense. Why worry about hosting your own server, when you can just execute one locally in the visitor’s browser?


True! I think it will happen faster and they set the date so far in the future to not encourage a flame war.


Containers I'm not so sure about, but I can see WASM runtimes being integrated into edge-function-like services.


Can we get to a point where we ship binaries NOT source code?


I honestly dread the world if this happens. WebAssembly is dog slow. It essentially means we will set back literally trillions of dollars of hardware improvement.


Buzzwords all the way down!


no they won't.


I mean, to state the obvious....no, no they won't.


It's not obvious to everyone. Some of us would need to hear some reasoning before we'd find this line convincing.


Does it really make sense to run so much inside a web browser?

Are we perhaps over-utilizing web browsers?


WASM is not strictly for the web; it's a generic low-level language target/series of specs.

The article itself is talking about server runtimes/workloads, nothing to do with browsers.

If generic specs can be built around WASM in such a way that you can run code securely without booting an entire OS, WASM will replace traditional containers. Smaller runtime footprint.

A consequence of having a generic compile target/spec is that your applications will also be able to run in the browser with minimal effort


> WASM is not strictly for the web, its a generic low-level language target/series of specs.

What about it is preferable to MLIR or even LLVM?

https://mlir.llvm.org/


WASM is designed with security in mind, and specifically to provide primitives for use in a secure runtime.

LLVM doesn't provide that to my knowledge... instead just cross-compiling code specific to each target. Correct me if mistaken.

I imagine there are meaningful build time drawbacks to compiling an executable for each environment as well. While WASM will never outperform native code, it can get very close. Minor loss in performance is a small price to pay for portability/security.


At the risk of taking the bait, I'll point out that WASM doesn't just run in web browsers.


For those of us who have paid no attention to WASM - what is the difference between WASM and old tech like Java Applets? They seem to have similar goals - write once and run it on all the things... Applets had issues, but the pitch seems similar, no?

If that is the case, why was a new standard necessary vs. bytecode from JVM or CLR, for instance?


It's an open standardized bytecode that's specifically designed to be easy to implement, which means there are already a lot of different runtimes that implement it, so you're not required to have for example JRE or a .NET runtime installed. Because of this simplicity it is also an easy target to compile to, which means that many languages can already be compiled to it.


Of course you're required to have a WASM runtime to run WASM bytecode. How else could it run? It's no different to a JVM or .NET in that regard.

JVM bytecode is open, standardized, and quite easy to implement. CLR bytecode likewise, albeit less easy to implement. You can implement a simple JVM on your own. Avian is an example of a lightweight JVM written by two guys (albeit very productive guys) that not only implemented the spec but also had a proper JIT compiler, AOT compiler, and semi-decent garbage collector! In practice, ease of implementation isn't very important for this sort of thing. Having a good open source implementation is sufficient. WASM has this in V8, and other than the rewrite-it-in-Rust factor it's not super clear (to me?) what other runtimes bring to the table.

> Because of this simplicity it is also an easy target to compile to

No, this is backwards. WASM's simplicity makes it harder to compile to, not easier, because language implementations have to do more themselves. That's why even after many years the only languages people are using with it are languages with very minimal demands on the runtime, like C or Rust. Languages with more advanced compilation requirements like GC integration (modern GC algos affect the compiler too) have been ignoring WASM so far, and this is why V8's JavaScript engine isn't itself running on WASM.


> so you're not required to have for example JRE or a .NET runtime installed

These days you are not required to have a JRE or .NET runtime installed. At least on the JRE side, it can be embedded in the executable - I assume .NET has something similar.

There has to be more to it than that, though. Perhaps it is the "easy" part you mentioned - I have no idea what's involved in building a greenfield java bytecode interpreter, for example.


Java’s bytecode is extremely simple. You could implement a PoC interpreter in a day if you put your mind to it
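To give a rough sense of how small such a thing can be, here is a toy stack-machine interpreter in JavaScript. The opcode numbers are borrowed from real JVM opcodes (iconst_1 = 0x04, iconst_2 = 0x05, iadd = 0x60, ireturn = 0xac), but this is only a sketch of the idea, nowhere near a conforming JVM:

```javascript
// Toy stack-machine interpreter over a handful of JVM-style opcodes.
function run(code) {
  const stack = [];
  let pc = 0;
  while (pc < code.length) {
    const op = code[pc++];
    switch (op) {
      case 0x04: stack.push(1); break;            // iconst_1
      case 0x05: stack.push(2); break;            // iconst_2
      case 0x60: {                                // iadd
        const b = stack.pop(), a = stack.pop();
        stack.push((a + b) | 0);                  // 32-bit integer wraparound
        break;
      }
      case 0xac: return stack.pop();              // ireturn
      default: throw new Error(`unknown opcode 0x${op.toString(16)}`);
    }
  }
}

console.log(run([0x04, 0x05, 0x60, 0xac])); // 1 + 2 -> 3
```

The hard parts of a real JVM (the class-file format, the verifier, GC, JIT) are of course elsewhere; the core dispatch loop really is this simple.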


Interesting, and good to know.

I guess that brings me back to my original question - why WASM vs. some existing bytecode interpreter? What makes WASM the best choice, and why does it exist?


WASM is "some existing bytecode interpreter". It's widely deployed, i.e. it's in all major browsers already, and has been for a few years now. That alone makes it a good compilation target.
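"Widely deployed" is literal here: the snippet below runs in Node or any modern browser with no toolchain at all, instantiating a tiny WASM module hand-assembled as raw bytes per the WebAssembly binary format. The `add` export is just an example name:

```javascript
// A minimal hand-assembled WASM module exporting add(a, b) = a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // "\0asm" magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type section: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // function section: one func of type 0
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export section: export func 0 as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section: one body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b                    // local.get 0, local.get 1, i32.add, end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(exports.add(2, 3)); // 5
```

That zero-install ubiquity is the point: the JVM never shipped as a standard part of every browser and OS, while the WASM runtime already has.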


It wasn't when WASM was invented, though. That was the question - why make something new when existing bytecode systems could have been made to work and achieve the same goal?


I think that this is a silly question. "Could have been made to work" is far cry from "best choice".

Browser vendors agreed that they could standardise something lower level than JavaScript, in the browser, and this is the chosen outcome. Why _not_ make something new? There is experience doing it, so it's possible and a known engineering task, and it can be built to the specific requirements. And by making a new one, you don't have to buy into someone else's brand or ecosystem.

The rest is historical details.

Ultimately, asking strangers on the internet to justify software architecture design decisions that happened years ago and worked out rather well, is for the birds.

In future the question is going to be less "why use WASM when the JVM is right there, one install away", and more "why use the JVM when WASM is right there, zero installs away".


wasm has properties that are unique versus other existing bytecode systems

those properties are important


they're indeed very similar

the main difference distills down to execution/implementation

java applets were slow, wasm is fast

that distinction makes a categorical difference


It seems there would still be some sort of interpreter for WASM - meaning any performance gains could have been made using existing bytecode interpreters. I'm probably wrong on that assumption, but am unclear why.

I guess I'm wondering what makes WASM so much better than investing engineering energies into existing things.


wasm compiles to "native" code, for whatever "native" means in your execution environment

if you're deploying wasm to the browser, then native means your JS engine

if you're deploying wasm to the server, then native means actual machine code

the competitive advantage versus java is (at a very high level) the language design -- java is much higher-level than wasm, which limits its potential


I didn't realize that it was used outside of web browsers too.


I have not listened to this podcast. Given the development speed I'm seeing from WebAssembly, I really don't think so.

Also, it seems to solve a different problem - yes, some things containers are useful for, WebAssembly might be really useful for too.

But a lot of other things I just don't see it. Will I run databases compiled to webassembly?

That said, I really think WebAssembly is cool and like many of the ideas. I will listen to this podcast; let's hope I am convinced.

Edit: Good title marketing



