Not seeing complaints doesn't mean they don't exist.
Not to mention the UI latency that is common in Electron apps, which is just a low-level constant annoyance.
Now, I was at Red Hat at the time, in the BU that built podman, and Docker was just largely refusing any of Red Hat's patches around rootless operation; this was one of the top three motivations, if not the top one, for Red Hat spinning up podman.
You'd have to point me to those PRs; I don't recall anything specifically around rootless.
I recall a lot of things like a `--systemd` flag to `docker run`, and just general things that reduced container security to make systemd fit in.
Absolutely none of this is true.
Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?).
Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about.
There was never "no interest" at Docker in cgv2 or rootless.
Never.
cgv2 early on was not usable. It lacked so much functionality that v1 had.
It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.
Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).
For the record, cpuguy83 was in the trenches at Docker circa 2013; it was him and a handful of other people working on Docker when it went viral. He has an insider's perspective, so I'd trust what he says.
I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.
I've really appreciated RH's work both on podman/buildah and in the supporting infrastructure like the kernel that enables nesting, like using buildah to build an image inside a containerized CI runner.
That said, I've been really surprised not to see more first-class CI support for a repo supplying its own Dockerfile and being like "stage 1 is to rebuild the container", "stage 2 is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer".
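To make the lockfile-keyed caching concrete, here's a minimal sketch (assuming a Node.js project with a package-lock.json; any package manager's lockfile works the same way). Only a change to the lockfile invalidates the dependency layer, so a CI job that rebuilds the image on every run still gets the install step from cache most of the time:

```dockerfile
# Minimal sketch, assuming a Node.js app with package-lock.json;
# substitute your package manager's manifest and lockfile.
FROM node:20-slim
WORKDIR /app

# Copy only the manifest and lockfile first, so this layer (and the
# install below) stays cached until the lockfile actually changes.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes land in later layers and never bust the dependency cache.
COPY . .
CMD ["npm", "test"]
```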
Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond the stuff that ArgoCD was doing, and Shipwright (which isn't really CI/CD focused, but did some stuff around the actual build process; it really suffered a failure to launch, though).
My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB.
But if that's not the case (robotics, ML, gamedev, etc.), or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take non-trivial time, which is particularly galling for a step that container tools are so well equipped to cache away.
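For the apt case specifically, one way to soften that is a BuildKit cache mount, so repeated rebuilds at least don't re-download and re-unpack every package. A sketch, assuming BuildKit and a Debian/Ubuntu base image (the package names are just placeholders):

```dockerfile
FROM ubuntu:22.04

# The base image ships an apt config that deletes downloaded .debs;
# disable it so the cache mount below is actually useful.
RUN rm -f /etc/apt/apt.conf.d/docker-clean && \
    echo 'Binary::apt::APT::Keep-Downloaded-Packages "true";' \
      > /etc/apt/apt.conf.d/keep-cache

# Cache mounts persist across builds on the same builder, so apt's
# package cache and lists survive even when this layer is rebuilt.
RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
    --mount=type=cache,target=/var/lib/apt,sharing=locked \
    apt-get update && \
    apt-get install -y --no-install-recommends build-essential cmake
```

The usual caveat is that cache mounts live on the builder, so ephemeral CI runners need a persistent builder or a remote cache for this to pay off.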
I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.
I’d be surprised if the systemd thing was not also true.
I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat’s bread and butter; are you saying they developed all of that for no reason?
I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business.
All they had to do was say they'll include container support as part of their contracts.
What they did do, AIUI based on feedback in the oss docker repos, is those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container in order to be "in support".
So that's kind of a self-feeding thing.
I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.
Correct. Maybe starting with RHEL7, Red Hat took the stance that “containers are Linux”. Supporting Docker in RHEL7 was built in as soon as we added it to the ‘rhel-7-server-extras-rpms’ repo. The containers themselves were supported as “customer workloads”, while the docker daemon and CLI were supported as part of the OS.
What they did do, AIUI based on feedback in the oss docker repos, is those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container in order to be "in support". So that's kind of a self-feeding thing.
Not quite right. RHEL containers (and now UBI containers) are only supported when they run on RHEL OS hosts or RHEL CoreOS hosts as part of an OpenShift cluster. systemd did not work (well?) in containers for a while and has never been a requirement. There are several reasons for this RHEL-containers-on-RHEL/RHCOS requirement. For one, RHEL/UBI containers inherit their subscription information from their host, much like how RHEL VMs can inherit their subscription if you have virtualization host-based subscriptions. If containers weren’t tied to their host, then by convention, each container would need to subscribe to Red Hat on instantiation and would consume a Red Hat subscription instance.
I was an early container adopter at a large RHEL shop, and they absolutely required us to use their forked version of Docker for the daemon and RHEL-based images with systemd.
This was mostly so containers could register with systems manager and count against our allowed systems.
We ignored them because it was so bad and buggy. This is when I switched to CoreOS for containerized workloads.
Definitely do not look at Kubernetes as a good example of Go code, and especially not how it handles deps (btw, it only relatively recently switched to Go modules).