Hacker News

You can put everything in containers and still not need much orchestration, though. My personal projects run in dozens of containers, and the "orchestration" consists of a Makefile include, pulled into each project, that creates a systemd service file based on some variables and pushes it into the right place. The service files pull down and set up a suitable Docker container. The full setup for a couple of dozen containers is 40-50 lines of Makefile and a ~20-line template service file.
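A minimal sketch of what such a generated service file might look like, assuming the Makefile just substitutes a name, image, and port into a template. All names, paths, and options here are hypothetical, not the commenter's actual setup:

```shell
#!/bin/sh
# render_unit NAME IMAGE PORT: print a systemd unit that runs a Docker
# container -- roughly the kind of file a Makefile template could generate.
render_unit() {
    name=$1
    image=$2
    port=$3
    cat <<EOF
[Unit]
Description=Docker container for ${name}
After=docker.service
Requires=docker.service

[Service]
Restart=always
ExecStartPre=-/usr/bin/docker pull ${image}
ExecStartPre=-/usr/bin/docker rm -f ${name}
ExecStart=/usr/bin/docker run --rm --name ${name} -p ${port}:${port} ${image}
ExecStop=/usr/bin/docker stop ${name}

[Install]
WantedBy=multi-user.target
EOF
}

# Render a unit for a hypothetical project.
render_unit myapp myorg/myapp:latest 8080
```

The Makefile include would render something like this once per project and drop it under /etc/systemd/system, after which systemd handles restarts and boot ordering.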

Of course it won't scale to massive projects, and for work, I occasionally use Kubernetes and other more "serious" orchestration alternatives, but frankly it takes fairly big projects before they start paying for themselves in complexity.

Meanwhile, my Docker containers have kept chugging along for several years without needing any maintenance aside from automatic security updates.

I do agree with you that Kubernetes may encourage patterns that are useful, though. But really the most essential part is that you can find relatively inexperienced devops people who have picked up some Kubernetes skills. That availability makes up for a lot of pain versus finding someone experienced enough to wire up a much leaner setup.



When you deploy a new version of a container, how do you avoid downtime? Do you start a new container running the new version, wait for the new container to be ready, switch traffic to the new container, stop traffic to the old container and drain connections, and then stop the old container?


For my home projects it doesn't matter. For work projects, yes. It's an easy thing to automate. Incidentally, most of the pain in this is that most load balancers are the reverse of what makes most sense: the app servers ought to connect to the load balancer and tell it when they can service more requests, not have things pushed at them.
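The switchover sequence from the question above can be sketched as a short deploy script. This is a dry run: `run` only echoes each command so the ordering is visible, and the container names and the `lb-switch` helper (standing in for whatever repoints the load balancer) are hypothetical:

```shell
#!/bin/sh
# Dry-run sketch of a zero-downtime container swap: start new, wait for
# health, switch traffic, drain, stop old. `run` echoes instead of
# executing; drop the echo to make it real.
run() { echo "+ $*"; }

deploy() {
    old=$1 new=$2 image=$3
    # 1. Start the new version alongside the old one.
    run docker run -d --name "$new" "$image"
    # 2. Wait until the new container answers its health check.
    run curl --fail --retry 10 --retry-connrefused http://127.0.0.1:8081/healthz
    # 3. Point the load balancer at the new container (hypothetical helper).
    run lb-switch "$new"
    # 4. Give in-flight requests time to drain, then stop the old container.
    run sleep 30
    run docker stop "$old"
    run docker rm "$old"
}

deploy app-blue app-green myorg/app:v2
```

In a real setup the health-check step would loop until the endpoint responds before traffic is switched, and the fixed 30-second drain would be replaced by whatever the load balancer reports about in-flight connections.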


> most load balancers are the reverse of what makes most sense: the app servers ought to connect to the load balancer and tell it when they can service more requests, not have things pushed at them

Reminds me of Mongrel2


Yes, Mongrel2 is an interesting design. So many things get simpler when you invert that relationship.


> Of course it won't scale to massive projects

Most projects aren't massive -- at work, 2 years on, we're still using a single instance of a single node, with the only component that needs to be reliable stored as static files in S3.


Absolutely. Which is one of the reasons I find things like Kubernetes overkill for most setups.



