
It didn't help them that they rejected the traditionally successful way of monetizing open source software: selling support contracts to large corporate users.

Corporate customers didn't like the security implications of the Docker daemon running as root; they wanted better sandboxing and management (cgroups v2), wanted to be able to run their own internal registries, didn't want docker trying to fight with systemd, etc.

Docker was not interested (in the early years) in adopting cgroups v2 or daemonless / rootless operation, and they wanted everyone to pay to use Docker Hub on the public internet rather than running their own internal registries, so docker-cli didn't support alternate registries for a long, long time. And it seemed like they disliked systemd for "ideological" reasons, to the extent that they didn't make much effort to resolve the problems that would crop up between docker and systemd.
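For illustration, the sort of workflow those customers wanted looks roughly like this today (the hostname and tags are made up, and a plain-HTTP registry would also need to be listed under "insecure-registries" in daemon.json):

    # run a registry inside the firewall using Docker's open source registry image
    docker run -d -p 5000:5000 --name registry registry:2

    # push and pull with a fully qualified image name instead of Docker Hub
    docker tag myapp:1.2.3 registry.internal.example.com:5000/team/myapp:1.2.3
    docker push registry.internal.example.com:5000/team/myapp:1.2.3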

Because Docker didn't want to build the product that corporate customers wanted to use, and didn't accept patches when Red Hat tried to implement those features themselves, eventually Red Hat just went out and built Podman, Quay, and the entire ecosystem of tooling that those corporate customers wanted (and sold it to them). That was a bit of an own goal.



Absolutely none of this is true. Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?).

Corporate customers do not care about any of the things you mentioned. I mean, maybe some, but in general no. That's not what corps think about.

There was never "no interest" at Docker in cgv2 or rootless. Never. cgv2 early on was not usable. It lacked so much functionality that v1 had. It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.

Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).


for the record, cpuguy83 was in the trenches at docker circa 2013; it was him and a handful of other people working on docker when it went viral. He has an extreme insider's perspective, and I'd trust what he says.


I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.
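For anyone who hasn't seen it, this is roughly what rootless operation looks like in podman today (the image is just an example):

    # run entirely as an unprivileged user: no daemon, no root on the host
    podman run --rm docker.io/library/alpine id
    # you appear as uid 0 inside the container, but that is mapped onto your
    # own uid and a subuid range via user namespaces, not real root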


I've really appreciated RH's work both on podman/buildah and in the supporting infrastructure like the kernel that enables nesting, like using buildah to build an image inside a containerized CI runner.

That said, I've been really surprised to not see more first class CI support for a repo supplying its own Dockerfile and being like "stage one is to rebuild the container", "stage two is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "separate job somewhere that rebuilds a static base container on a timer".
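To make that concrete, here is a rough sketch of what I mean; the base image, lockfile, and test command are stand-ins for whatever your stack actually uses:

    # Dockerfile where the dependency layer is keyed to the lockfile, so it
    # only rebuilds when package-lock.json changes
    cat > Dockerfile <<'EOF'
    FROM docker.io/library/node:20
    WORKDIR /app
    # copy only the manifest and lockfile first so this layer caches well
    COPY package.json package-lock.json ./
    RUN npm ci
    # copying the rest of the source doesn't bust the dependency layer
    COPY . .
    EOF

    # stage one: rebuild the repo's own container
    podman build -t app-ci:latest .
    # stage two: run tests inside instances of that container (fan out in CI)
    podman run --rm app-ci:latest npm test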


Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond the stuff that ArgoCD was doing, and Shipwright (which isn't really CI/CD focused, but did some stuff around the actual build process and really suffered a failure to launch).


My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB.

But if that's not the case (robotics, ML, gamedev, etc) or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take up non-trivial time— particularly galling for a step that container tools are so well equipped to cache away.

I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.


That's true, we didn't do much around it. Small startup with monetization problems and all.


So absolutely at least some of that is true.

I’d be surprised if the systemd thing was not also true.

I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat’s bread and butter; are you saying they developed all of that for no reason?


I made no comment about RedHat's offerings.

I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.

What they did do, AIUI based on feedback in the oss docker repos, is those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container in order to be "in support". So that's kind of a self-feeding thing.


   I don't feel like RedHat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.
Correct. Maybe starting with RHEL7, Red Hat took the stance that “containers are Linux”. Supporting Docker in RHEL7 was built in as soon as we added it to the ‘rhel-7-server-extras-rpms’ repo. The containers were supported as “customer workloads”, while the docker daemon and CLI were supported as part of the OS.

   What they did do, AIUI based on feedback in the oss docker repos, is those contracts stipulated that you must run RHEL in the container and the host, and use systemd in the container in order to be "in support". So that's kind of a self-feeding thing.
Not quite right. RHEL containers (and now UBI containers) are only supported when they run on RHEL OS hosts or on RHEL CoreOS hosts as part of an OpenShift cluster. systemd did not work (well?) in containers for a while and has never been a requirement. There are several reasons for the RHEL-containers-on-RHEL/RHCOS requirement. For one, RHEL/UBI containers inherit their subscription information from their host, much like how RHEL VMs can inherit their subscription if you have virtualization host-based subscriptions. If containers weren’t tied to their host, then by convention, each container would need to subscribe to Red Hat on instantiation and would consume a Red Hat subscription instance.

https://access.redhat.com/articles/2726611


I was an early container adopter at a large RHEL shop, and they absolutely required us to use their forked version of docker for the daemon and RHEL-based images with systemd.

This was mostly so containers could register with systems manager and count against our allowed systems.

We ignored them because it was so bad and buggy. This is when I switched to CoreOS for containerized workloads.


I've worked in build/release engineering/devops for a long time.

I would be utterly shocked if corporate customers wouldn't want corporate Docker proxies/caches/mirrors.

Entire companies have been built on language specific artifact repositories. Generic ones like Docker are even more sought after.
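For what it's worth, Docker's open source registry image can even be run as a pull-through cache, which is exactly the proxy/mirror setup these companies ask for; the port and remote URL below are just an example:

    # run the registry image as a pull-through cache of Docker Hub
    docker run -d -p 5000:5000 \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      --name hub-mirror registry:2

    # then point the daemon at it via daemon.json, e.g.:
    #   { "registry-mirrors": ["http://localhost:5000"] }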


Right, and Docker sold such products from early on.


When Docker was new I had a really bad ADSL connection (2Mbps) and couldn't ever stack up a containerized system properly because Dockerhub would time out.

I did large downloads all the time; I used to download 25GB games for my game consoles, for instance. I just had to schedule them and use tools that could resume downloads.

If I'd had a local docker hub I might have used docker but because I didn't it was dead to me.


Unfortunately even podman etc. are still limited by OCI's decision to copy the Docker model.

Crun just stamp couples security profiles as an example, so everything in the shared kernel that is namespace incompatible is enabled.

This is why it is trivial to get unauditable communication between pods on a host, etc…


> Unfortunately even podman etc. are still limited by OCI's decision to copy the Docker model.

Which parts of the model are you referring to?


Container runtimes like OCI's runc implement the runtime spec [2], so that's the part of the model I mean.

Basically, docker started using lxc, but wanted a Go-native option, and wrote runc. If you look at [0] you can see how it actually instantiates the container. Here is a random blog that describes it fairly well [1].

crun is the podman-related project written in C, which is more efficient than the Go-based runc.

You can try this even as the user nobody 65534:65534, but you may need to make some dirs, or set envs.

Here is an example pulling an image with podman to make it easier, but you could just make an OCI spec bundle and run it:

    mkdir hello
    cd hello
    # pull the image and export its filesystem to a tarball
    podman pull docker.io/hello-world
    podman export $(podman create hello-world) > hello-world.tar
    # unpack it into a rootfs for the OCI bundle
    mkdir rootfs
    tar -C rootfs -xf hello-world.tar
    # generate a default rootless config.json, then point it at the image's /hello binary
    runc spec --rootless
    sed -i 's;"sh";"/hello";' config.json
    # hand the bundle to runc directly
    runc run container1
    
    # expected output:
    Hello from Docker!
runc doesn't support any form of constraints like a bounding set on seccomp, SELinux, AppArmor, etc., but it will apply the profiles you pass it.
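To be concrete, applying a stricter profile is entirely on the caller; something like this works, but nothing requires it (the profile path is hypothetical):

    # caller-supplied seccomp profile; nothing in the runtime requires it,
    # and omitting --security-opt just falls back to the default profile
    podman run --rm \
      --security-opt seccomp=/etc/containers/restrictive-seccomp.json \
      docker.io/library/alpine true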

Basically it fails open, and with the current state of apparmor and selinux it is trivial to bypass the minimal userns restrictions they place.

Historically, before rootless containers this was less of an issue, because you had to be a privileged user to launch a container. But with the holes in the LSMs, no ability to set administrative bounding sets, and the reality that none of the defaults constrain risky kernel functionality like vsock, openat2 etc... there are a million ways to break netns isolation etc...

Originally the docker project wanted to keep all the complexity of mutating LSM rules etc. in containerd, and they also fought even basic controls like letting an admin disable the `--privileged` flag at the daemon level.

Unfortunately due to momentum, opinions, and friction in general, that means that now those container runtimes have no restrictions on callers, and cannot set reasonable defaults.

Thus now we have to resort to teaching every person who launches a container to be perfect and disable everything, which they never do.

If you run a k8s cluster with nodes on VMs, try this for example: if it doesn't error out, any pod can talk to any other pod on the node, over a protocol you aren't logging and which has limited ability to be logged anyway. (This assumes your k8s nodes are running systemd v256+ and you aren't using containerd, which blocked vsock; cri-o, podman, etc. didn't, at least as of a couple of weeks ago.)

    socat - VSOCK-LISTEN:3000
You can also play around with other address families, as IPX, AppleTalk, etc. are all available by default, or see if you can use openat2 on some file in /proc to break out.

[0] https://manpages.debian.org/testing/runc/runc-spec.8.en.html [1] https://mkdev.me/posts/the-tool-that-really-runs-your-contai... [2] https://github.com/opencontainers/runtime-spec/blob/main/REA...


> Crun just stamp couples security profiles

I don't understand any of this :-)


I can't help but see a parallel with some of the entertainment franchises in recent years (Star Wars, etc.) -- where a company seems to be allergic to taking money by giving people what they want, and instead insists on telling people what they should want and blaming them when they don't.


yes; it's really notable that corporates and other support companies (e.g. redhat) don't want to start down the path of NIH, and will go to significant efforts to avoid it. However, once they have done it, it is very hard to make them come back.


I think the Star Wars problem was that instead of making the movies at a steady cadence they stretched it out too long.



