
> The future of compute is fine-grained. Cloudflare Workers is all about fine-grained compute, that is, splitting compute into small chunks -- a single HTTP request, rather than a single server instance. This is what allows us to run every customer's code (no matter how little traffic they get) in hundreds of locations around the world, at a price accessible to everyone.

I don't think this claim holds up at all.

Cloudflare Workers were designed to be so "fine-grained" because their whole rationale is to run a very small compute step on each request: a minor touch-up to the request or response, with negligible or at least tolerable performance impact.
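
For concreteness, here's a minimal sketch of that kind of per-request touch-up, assuming the standard Workers fetch-handler module syntax (the header name here is just illustrative, not something from the original comment):

    // Sketch of the "touch-up at each request/response" pattern:
    // pass the request through to the origin, then make a small edit
    // to the response before returning it.
    export default {
      async fetch(request: Request): Promise<Response> {
        const upstream = await fetch(request);              // subrequest to the origin
        const response = new Response(upstream.body, upstream); // copy with mutable headers
        response.headers.set("x-edge-touched", "1");         // minor per-request edit
        return response;
      },
    };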

This is not a major paradigm change. It's a request handler placed on edge servers to do a minor touch-up without the client noticing. Conceptually it's the same as a plain old controller in Spring or Express, just with tighter constraints, because it runs on resource-constrained hardware and has to stay within a tight per-request performance budget.
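
To illustrate the analogy being drawn, a roughly equivalent plain Express handler is sketched below; it has the same shape (one function per request), just hosted inside a long-lived server process you run yourself. The port and header name are assumptions for illustration:

    // Roughly the same request-handler shape, written as Express middleware.
    import express from "express";

    const app = express();
    app.use((req, res) => {
      res.set("x-edge-touched", "1"); // the same minor response edit
      res.send("handled");
    });
    app.listen(3000);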



Considering you're arguing with the tech lead of the technology in question, I'm curious what you mean by this. Are you saying kentonv is lying, or mistaken, or something else?


To be clear, I'm the tech lead of Cloudflare Workers, and wrote the core runtime implementation. Sorry, I should have stated that more clearly above.

While minor request/response rewrites were an early target use case, the platform is very much intended to support building whole applications running entirely on Workers. We do think this is a major paradigm shift.



