
> Even the cold start and memory benefits could largely be replicated for native code if you write a custom kernel ... people act as if lightweight isolation is fundamentally impossible without a JITted platform

I think when most people refer to this as being "impossible", they mean practically impossible, not disallowed by the laws of physics. "First, write your custom kernel" is a step only a minuscule proportion of projects can justify.

As for unique capabilities: none of its capabilities is really earth-shattering on its own. But taken together, and combined with the fact that the barrier to entry is so much lower than for alternatives like a custom kernel, it's quite understandable why people see it as bringing something new and exciting to the table.



I don't understand why we need a 'custom kernel'. Have your serverless provider give you the specs for whatever the heck they want to run, and compile against that. Why do we need a new kernel?


I could have explained better what I was talking about. The point of a custom kernel would be to get extremely fast startup and context-switch times, and extremely low per-process memory overhead, by doing things that Linux doesn't do - either because they're unnecessary or unhelpful for typical Linux use cases, or because they require making stronger assumptions about what userland needs to do. As a relatively flashy example, if you can assume that processes don't need overlapping addresses, you can share page tables between processes and rely on PCID/ASID to ensure each process can only access its own memory. [3]

But there are also simpler things. On Linux, starting a process requires forking the parent process, copying its page tables, only for `exec` to immediately throw them away. Even posix_spawn is implemented on top of vfork. That's fast enough for Linux, but it's unnecessary work. A custom kernel that doesn't provide Unix compatibility could dispense with it.
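
To make the fork/exec point concrete, here's a rough sketch of the kind of micro-comparison I mean (not the benchmark from the gist below; /bin/true, the iteration count, and the missing error handling are all arbitrary choices on my part), contrasting the classic fork+exec path with posix_spawn via the `libc` crate:

    // Requires the `libc` crate. A sketch, not a rigorous benchmark.
    use std::ffi::CString;
    use std::ptr;
    use std::time::Instant;

    fn main() {
        // Arbitrary trivial program; assumes /bin/true exists.
        let prog = CString::new("/bin/true").unwrap();
        let argv: [*mut libc::c_char; 2] = [prog.as_ptr() as *mut _, ptr::null_mut()];
        let envp: [*mut libc::c_char; 1] = [ptr::null_mut()];
        let iterations: u32 = 1000;

        // fork + exec: the child starts as a copy of the parent (page tables
        // duplicated, memory shared copy-on-write), then exec discards all of it.
        let t = Instant::now();
        for _ in 0..iterations {
            unsafe {
                let pid = libc::fork();
                if pid == 0 {
                    libc::execv(prog.as_ptr(), argv.as_ptr() as *const *const _);
                    libc::_exit(127); // only reached if exec failed
                }
                let mut status = 0;
                libc::waitpid(pid, &mut status, 0);
            }
        }
        println!("fork+exec:   {:?} per spawn", t.elapsed() / iterations);

        // posix_spawn: glibc builds this on a vfork-style clone, so the parent's
        // page tables are briefly borrowed rather than copied.
        let t = Instant::now();
        for _ in 0..iterations {
            unsafe {
                let mut pid: libc::pid_t = 0;
                libc::posix_spawn(&mut pid, prog.as_ptr(), ptr::null(), ptr::null(),
                                  argv.as_ptr(), envp.as_ptr());
                let mut status = 0;
                libc::waitpid(pid, &mut status, 0);
            }
        }
        println!("posix_spawn: {:?} per spawn", t.elapsed() / iterations);
    }

Either way, every spawn still pays for page table setup, scheduling, and teardown; the point is that a kernel designed for this workload could cut most of that out.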

To be fair, none of that may be necessary. This blog post is advertising 1-2ms startup time. Based on a little benchmark [1] I just ran on my desktop, Linux can execute a trivial process (/bin/busybox true) from start to finish in 0.4ms. A real application executed normally would obviously take much longer to start up. But I think their 1-2ms number is based on snapshotting the process after initialization, so a fair comparison would do the same on native.

Full process snapshotting for native processes has been done before ('unexec', CRIU), but admittedly it isn't very common, and I don't have a good way to test it. With snapshotting, no code runs in the process at startup, so most of the initialization time would be eliminated; the mere act of mapping more memory into the page tables would have some cost, but probably not very much.
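
For reference, the 0.4ms measurement above boils down to something like this (a simplified sketch along the lines of the gist, not its exact contents):

    // Times end-to-end execution of a trivial process.
    use std::process::Command;
    use std::time::Instant;

    fn main() {
        // Warm caches so the first iteration isn't an outlier.
        Command::new("/bin/busybox").arg("true").status().unwrap();

        let iterations: u32 = 1000;
        let t = Instant::now();
        for _ in 0..iterations {
            // Spawn, exec, run, and reap a trivial program, end to end.
            let status = Command::new("/bin/busybox").arg("true").status().unwrap();
            assert!(status.success());
        }
        println!("{:?} per process", t.elapsed() / iterations);
    }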

However, Fermyon uses Wasmtime, and Wasmtime's blog post on snapshotting [2] advertises single-digit microsecond startup, which is pretty cool. Starting native processes in single-digit microseconds is out of the question under Linux. But a custom kernel might be able to get there.
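
For context, that fast path is exercised through pre-instantiation: compile and link once, then stamp out instances per request. Roughly (recent versions of the wasmtime crate - the exact signatures have shifted between releases - with a toy module and no WASI, all simplifications on my part):

    // Requires the `wasmtime` and `anyhow` crates.
    use wasmtime::{Engine, Linker, Module, Store};

    fn main() -> anyhow::Result<()> {
        let engine = Engine::default();
        // Compile once, up front (a trivial module just for illustration).
        let module = Module::new(&engine, r#"(module (func (export "run")))"#)?;
        let linker: Linker<()> = Linker::new(&engine);
        // Resolve imports once; the copy-on-write memory-image machinery the
        // blog post describes lives inside the engine, this is just how you
        // get onto that path.
        let pre = linker.instantiate_pre(&module)?;

        // Per-request work: instantiate from the pre-resolved template and call in.
        let mut store = Store::new(&engine, ());
        let instance = pre.instantiate(&mut store)?;
        let run = instance.get_typed_func::<(), ()>(&mut store, "run")?;
        run.call(&mut store, ())?;
        Ok(())
    }

The per-request part is the bit their article puts in the single-digit-microsecond range; everything expensive happens before it.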

Edit: I guess I should add something. I'm not saying a custom kernel should just be table stakes. Writing one is obviously a major commitment and requires a high level of expertise. But you could say the same thing about JIT runtimes such as Wasmtime. There is even some overlap between what they're doing (i.e. loading binaries and managing processes). A kernel and a JIT are still fairly different projects, but they both require similar types of expertise, and I think the level of complexity is also comparable.

[1] https://gist.github.com/comex/0402dc95afb1c2ca3076d5b1b64bcc...

[2] https://bytecodealliance.org/articles/wasmtime-10-performanc...

[3] https://stackoverflow.com/questions/73753466/how-does-linux-...



