
Recently I have been wondering if systemd solves problems that are becoming less and less relevant to developers. New services are often deployed as containers.

While systemd has a bunch of container-related functionality, it does not integrate well into the Kubernetes or even Docker workflow. It's used very little in those environments.

If you are building CoreOS or NixOS system images, or traditional Linux system services, then systemd matters. But I think way more services are being built for the container world where these problems are solved differently.

For example, TLS configuration can be handled with common container patterns. The author's startup example would translate more easily to a full-blown Kubernetes environment, once the VC funding hits their bank account, had the service been written for containers from the start instead of for systemd.
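For instance, one common pattern is to terminate TLS at an Ingress and let cert-manager provision the certificate, so the service itself never touches certificates. A minimal sketch, assuming cert-manager is installed; all names (myapp, letsencrypt, myapp.example.com) are hypothetical:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: myapp
      annotations:
        # cert-manager watches this and provisions the certificate
        cert-manager.io/cluster-issuer: letsencrypt
    spec:
      tls:
        - hosts: [myapp.example.com]
          secretName: myapp-tls   # issued cert lands in this Secret
      rules:
        - host: myapp.example.com
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: myapp
                    port:
                      number: 8080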

It's a shame because systemd is very powerful and I've enjoyed using it.



As a developer I prefer using systemd instead of containers to deploy Golang applications.

Without (Docker) containers it is:

- build the Go binary and install it on the production server

- write and enable the systemd unit file (sketch below)

With (Docker) containers it is:

- write Dockerfile

- install Docker on the production server

- build the Docker image and deploy the container on the production server

I get the appeal of containers when one production server is used for multiple applications (e.g., you have a Golang app and a Redis cache), but in the example above I think containers are a bit of an overkill.
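To make that concrete: for a single Go binary the entire systemd side can be a unit file along these lines (a minimal sketch; the myapp name and paths are hypothetical):

    # /etc/systemd/system/myapp.service
    [Unit]
    Description=myapp (example Go service)
    After=network.target

    [Service]
    ExecStart=/usr/local/bin/myapp
    Restart=on-failure
    User=myapp

    [Install]
    WantedBy=multi-user.target

followed by "systemctl enable --now myapp".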


I feel the same way; systemd also has some comprehensive sandboxing capabilities built in. I have my gripes with systemd too, though, mostly with journald: it is slow, likely due to its on-disk format. It really could have used SQLite for this.
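For reference, the sandboxing mostly amounts to a few directives in the [Service] section; a non-exhaustive sketch (what is safe to enable depends on the service):

    [Service]
    ProtectSystem=full    # /usr, /boot and /etc are read-only
    ProtectHome=yes       # /home and /root are inaccessible
    PrivateTmp=yes        # service gets its own private /tmp
    PrivateDevices=yes    # minimal /dev, no physical devices
    NoNewPrivileges=yes   # no privilege escalation via setuid etc.

"systemd-analyze security <unit>" prints a rough exposure score for a given unit.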


Same. I deployed a fleet of transcoding servers with the worker logic being a simple Go program. It was super simple with systemd.


Without Docker it is also:

* have a production outage because your libc was updated and now your Go apps (which are dynamically linked against it by default) won't start (see the aside after this list)

* mess around with low-level cgroup settings if you need to oversubscribe safely

* cry in a corner the second you also need some Python libs installed to do some machine learning or OpenCV or whatever on the side
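(Aside on the libc point: the dynamic linking usually comes from cgo, e.g. the net package's resolver; a quick way to check and to opt out, with a hypothetical binary name:)

    # see whether the binary links libc dynamically
    ldd ./myapp

    # build with cgo disabled to get a static binary
    CGO_ENABLED=0 go build -o myapp .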


And if you really want to build a container image for any reason, you can still have systemd run it directly as a Portable Service instead of going through Docker.
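A rough sketch of that flow, with a hypothetical image name:

    # attach the image; systemd picks up the unit files it ships
    portablectl attach ./myapp.raw

    # then manage it like any other service
    systemctl start myapp.service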


For those of us using Java, such problems were already kind of irrelevant in 2005.

Where you deploy your EAR/WAR file doesn't matter: the application container can be running on Windows, any UNIX flavour, or even bare metal; what matters is that a JVM is available in some way.

Also, in the big-boy UNIX club (AIX, HP-UX, Solaris, ...), systemd-like alternatives had been adopted before there was such an outcry in the GNU/Linux world.

On cloud platforms, if you are using a managed language, this now goes beyond what Java allowed.

You couple your application with a bunch of deployment configuration scripts, and it is done, regardless of how it gets executed in the end.

The cloud is my OS.


Containers might be popular in the startup "pay five figures a month to $CLOUD_PROVIDER" scene while VCs rain infinite free money, but there are still plenty of situations where you have to deal with old-school physical machines, and there it's often easier to just run the software on bare metal rather than add Docker as yet another layer of abstraction.


I run a bunch of services written in Python on 50+ bare-metal machines, and I can tell you that dockerizing everything made my life easier.


Nomad has a systemd-nspawn driver in the community section, though. https://www.nomadproject.io/docs/drivers/external/nspawn


You can just use podman to run Docker containers. That workflow is honestly what I wanted years ago when I first used Docker: containerization is part of the core system, and you can progressively containerize your core services while also running full containers on top of the same runtime.
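For example (container name and image are just placeholders), podman accepts the same CLI as docker and can hand a container over to systemd:

    # same flags as docker run
    podman run -d --name web -p 8080:80 docker.io/library/nginx

    # generate a unit file so systemd supervises the container
    podman generate systemd --new --name web \
      > ~/.config/systemd/user/container-web.service
    podman stop web && podman rm web
    systemctl --user daemon-reload
    systemctl --user enable --now container-web.service

(Newer podman versions point to Quadlet for the same job.)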


> New services are often deployed as containers.

That's another problem to be solved.


One reason application containers are successful is that they eliminate the complexities of a single system where multiple services are running and potentially interfering with each other.

There is no need for PrivateTmp= or some of the other configuration shown in this article because the application container already runs in a separate environment.

I think this is worth considering with respect to this article, even though containers definitely bring their own problems.
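The PrivateTmp= example is easy to see from the container side: every container already runs in its own mount namespace, so /tmp is private by construction. A quick demonstration:

    # each container gets a fresh, private /tmp
    docker run --rm alpine sh -c 'touch /tmp/marker; ls /tmp'
    docker run --rm alpine ls /tmp   # marker is not there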



