For real. Show me a company that has scaled RoR or Django to 1 million concurrent users without blowing $250,000/month on their AWS bill. I've worked at unicorn companies trying to do exactly that.
Their baseline was 800 instances of the Rails app...lol.
I'm not going to name names (you've heard of them) ... but this is a company that had to invent an entirely new deployment process just to get new code onto that massive beast of a Rails fleet within a finite amount of time.
I've scaled a single Rails server to 50k concurrent users. So if Rails is the theoretical bottleneck there, and we extrapolate from my meager efforts, that's only 20 servers for 1 million concurrent, or around $1,000/mo at the price point I was paying (Heroku).
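Spelling out that back-of-envelope (the $50/server/month is just what those two numbers imply, not a quoted Heroku price):

  # Extrapolating the 50k-concurrent-per-server result above.
  concurrent_target = 1_000_000
  per_server = 50_000    # what I measured on one Rails server
  cost_per_server = 50   # USD/month, implied by $1000/mo across 20 servers

  servers = -(-concurrent_target // per_server)   # ceiling division -> 20
  monthly_cost = servers * cost_per_server        # -> 1000
  print(servers, monthly_cost)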
Rails isn't at the top of the speed charts these days, but it's not that slow either.
“Rails can’t scale” is so 10 years ago. The bottleneck is usually something else: DB queries, network I/O, a huge Rails monolith with a large memory footprint, or an application that isn’t well architected or optimized.
Sounds impressive until you realize that there are 86,400 seconds in a day, so even if the majority of those requests happen during business hours, that's still firmly under 200 qps per server. On modern hardware that's very small. Also, what instance size?
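To make that arithmetic concrete, here's the back-of-envelope in runnable form. Every input is a hypothetical placeholder, since the parent's actual figures aren't quoted here; plug in the real numbers:

  # Per-server QPS from a daily request total (all numbers hypothetical).
  requests_per_day = 1_000_000_000   # placeholder daily total
  servers = 150                      # placeholder fleet size
  business_fraction = 0.8            # say 80% of traffic in business hours
  business_seconds = 8 * 3600        # an 8-hour peak window

  avg_qps = requests_per_day / 86_400 / servers
  peak_qps = requests_per_day * business_fraction / business_seconds / servers
  print(f"avg {avg_qps:.0f} qps/server, peak {peak_qps:.0f} qps/server")
  # -> avg 77 qps/server, peak 185 qps/server: still under 200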
The language/runtime certainly has an impact. But in reality there is no way to compare these scaling claims. For all we know, people are talking about serving from an HTTP-level cache without even hitting the runtime.
This is trivial with epoll(7) or io_uring(7). The "5 EC2 instances" you're describing could likely be attributed to language and/or framework bloat, but it's hard to know for certain without details.
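For anyone who hasn't seen it, a minimal sketch of that event-loop style using Python's stdlib epoll binding (Linux only). It's an echo server rather than HTTP, but the pattern, one process multiplexing thousands of sockets, is the same:

  import select, socket

  srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
  srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
  srv.bind(("0.0.0.0", 8080))
  srv.listen(1024)
  srv.setblocking(False)

  ep = select.epoll()
  ep.register(srv.fileno(), select.EPOLLIN)
  conns = {}

  while True:
      for fd, events in ep.poll():        # sleep until any socket is ready
          if fd == srv.fileno():          # new connection arriving
              conn, _ = srv.accept()
              conn.setblocking(False)
              ep.register(conn.fileno(), select.EPOLLIN)
              conns[conn.fileno()] = conn
          elif events & select.EPOLLIN:   # data ready on an existing connection
              conn = conns[fd]
              data = conn.recv(4096)
              if data:
                  conn.send(data)         # echo it back
              else:                       # peer closed the connection
                  ep.unregister(fd)
                  conn.close()
                  del conns[fd]

One process, no threads, and the kernel does the bookkeeping; that's why "N concurrent connections" alone says little about language speed.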
Raw PHP scripts, no ORM either. There are very good abstractions for some logic, and other parts are just one big spaghetti function. Changing anything is difficult, and the code is critical, so we aren't able to refactor much.
> Show me a company that has scaled RoR or Django to 1 million concurrent users without blowing $250,000/month on their AWS bill.
Can you deploy something to Vercel that supports a million concurrent users for less than $250K/month? What about using AWS Lambdas? Go microservices running in K8s?
I think your infra bills are going to skyrocket no matter your software stack if you're serving 1 million+ concurrent users.
"without blowing $250,000/month on their AWS bill". The point is that you don't need AWS for this! You can use Docker to configure much, much cheaper/faster physical servers from Hetzner or similar with super-simple automated failover, and you absolutely don't need an expensive dedicated OPS team for that for this kind of simple deployments, as I read so often here on HN.
You might be surprised at how far you can go with the KISS approach on modern hardware and open source tools.
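As an illustration of what "super-simple automated failover" can mean here, a sketch (the health URL and promote script are placeholders, nothing Hetzner-specific): poll the primary and promote a standby after a few consecutive failures:

  import subprocess, time, urllib.request

  PRIMARY_HEALTH = "http://primary.example.internal/health"  # placeholder
  PROMOTE_CMD = ["./promote_standby.sh"]                     # placeholder script
  FAILURES_BEFORE_FAILOVER = 3

  def healthy(url, timeout=2):
      try:
          with urllib.request.urlopen(url, timeout=timeout) as resp:
              return resp.status == 200
      except OSError:                    # refused, timeout, DNS failure...
          return False

  failures = 0
  while True:
      failures = 0 if healthy(PRIMARY_HEALTH) else failures + 1
      if failures >= FAILURES_BEFORE_FAILOVER:
          subprocess.run(PROMOTE_CMD, check=True)  # e.g. float an IP, repoint DNS
          break
      time.sleep(5)

Tools like keepalived do this properly, but the point stands: it doesn't take a dedicated ops team.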
You ain’t replacing $250k/mo worth of EC2 with a single Hetzner server, so your “super-simple failover” option goes out the window. Bare metal is not that much faster if you’re running Ruby on it; don’t fall for the marketing.
I never said that you should have only one server on Hetzner. For the web servers and background workers, scaling horizontally with Docker images on physical servers is still trivial.
By the way, I was running my startup on 17 physical machines on Hetzner, so I'm not speaking from marketing but from experience.