Hacker News | cpuguy83's comments

Not seeing complaints doesn't mean they don't exist. Not to mention the UI latency common in Electron apps, which is a constant, low-level annoyance.

That does not exist in the US as far as I've seen.

There's Amazon's "just walk out" stuff, which they just killed.


It compiles fast, starts up fast, and doesn't have a ton of hoops to jump through (e.g., the borrow checker in Rust).

I haven't personally used it, but containerd also has "nerdbox": http://github.com/containerd/nerdbox

Docker can run rootless the same way podman does.
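
For context, rootless Docker is set up per-user rather than system-wide. A minimal sketch, assuming the docker-ce-rootless-extras package is installed (these follow Docker's documented rootless setup; exact paths can vary by distro, and this is environment-dependent rather than something runnable anywhere):

```shell
# Install and start a per-user (rootless) dockerd instance.
dockerd-rootless-setuptool.sh install

# Point the docker client at the per-user daemon socket.
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock

# Containers now run without root on the host.
docker run --rm hello-world
```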

It can now. I was at Red Hat at the time, in the BU that built podman, and Docker was largely refusing Red Hat's patches around rootless operation; this was one of the top three motivations, if not the top one, for Red Hat spinning up podman.

You'd have to point me to those PRs; I don't recall anything specifically around rootless. I recall a lot of things like a `--systemd` flag to `docker run`, and general changes that reduced container security to make systemd fit in.

Ah the classic "it's a terrible idea until you implement it elsewhere and show us up".

Absolutely none of this is true. Docker had support contracts (Docker EE... and trying to remember, docker-cs before that naming pivot?).

Corporate customers do not care about any of the things you mentioned. I mean, maybe some do, but in general, no. That's not what corps think about.

There was never "no interest" at Docker in cgroups v2 or rootless. Never. Early on, cgroups v2 was not usable: it lacked so much of the functionality v1 had. It also didn't buy much, particularly because most Docker users aren't manually managing cgroups themselves.

Docker literally sold a private registry product. It was the first thing Docker built and sold (and no, it was not late, it was very early on).


For the record, cpuguy83 was in the trenches at Docker circa 2013; it was him and a handful of other people working on Docker when it went viral. He has an extreme insider's perspective; I'd trust what he says.

I mean you can say that, but on the topic of rootless, regardless of "interest" at Docker, they did nothing about it. I was at Red Hat at the time, a PM in the BU that created podman, and Docker's intransigence on rootless was probably the core issue that led to podman's creation.

I've really appreciated RH's work both on podman/buildah and in the supporting infrastructure like the kernel that enables nesting, like using buildah to build an image inside a containerized CI runner.

That said, I've been really surprised not to see more first-class CI support for a repo supplying its own Dockerfile and saying "stage 1 is to rebuild the container; stage 2 is a bunch of parallel tests running in instances of the container". In modern Dockerfiles it's pretty easy to avoid manual cache-busting by keying everything to a package manager lockfile, so it's annoying that the default CI paradigm is still "a separate job somewhere that rebuilds a static base container on a timer".
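
As a sketch of that lockfile-keyed pattern (a hypothetical Node app; the base image tag and file names are assumptions, not anything from the thread), the trick is copying only the manifest and lockfile before installing, so source edits don't bust the dependency layer:

```dockerfile
# Hypothetical example: the dependency layer is keyed to the lockfile
# alone, so it stays cached until package-lock.json itself changes.
FROM node:20-slim AS deps
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci

# Test stage: reuses the cached deps layer, then adds the source tree.
FROM deps AS test
COPY . .
RUN npm test
```

CI can then build the `deps` target once and fan tests out across instances of it, rather than reinstalling everything per job.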


Yeah, I've moved on from there, but I agree. There wasn't a lot of focus on the CI side of things beyond what ArgoCD was doing, and Shipwright (which isn't really CI/CD-focused, but did some work around the actual build process; it really suffered a failure to launch).

My sense is that a lot of the container CI space just kind of assumes that every run starts from nothing or a generic upstream-supplied "stack:version" container and installs everything every time. And that's fine if your app is relatively small and the dependency footprint is, say, <1GB.

But if that's not the case (robotics, ML, gamedev, etc) or especially if you're dealing with a slow, non-parallel package manager like apt, that upfront dependency install starts to take up non-trivial time— particularly galling for a step that container tools are so well equipped to cache away.

I know depot helps a bunch with this by at least optimizing caching during build and ensuring the registry has high locality to the runner that will consume the image.


That's true, we didn't do much around it. Small startup with monetization problems and all.

So absolutely at least some of that is true.

I’d be surprised if the systemd thing was not also true.

I think it’s quite likely Docker did not have a good handle on the “needs” of the enterprise space. That is Red Hat’s bread and butter; are you saying they developed all of that for no reason?


I made no comment about Red Hat's offerings.

I don't feel like Red Hat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.

What they did do, AIUI based on feedback in the OSS docker repos, is that those contracts stipulated you must run RHEL in the container and on the host, and use systemd in the container, in order to be "in support". So that's kind of a self-feeding thing.


   I don't feel like Red Hat had to do anything to sell support contracts in this case, because that was already their business. All they had to do was say they'll include container support as part of their contracts.
Correct. Maybe starting with RHEL 7, Red Hat took the stance that “containers are Linux”. Supporting Docker in RHEL 7 was built in as soon as we added it to the ‘rhel-7-server-extras-rpms’ repo. The containers were supported as “customer workloads”, while the docker daemon and CLI were supported as part of the OS.

   What they did do, AIUI based on feedback in the OSS docker repos, is that those contracts stipulated you must run RHEL in the container and on the host, and use systemd in the container, in order to be "in support". So that's kind of a self-feeding thing.
Not quite right. RHEL containers (and now UBI containers) are only supported when they run on RHEL hosts, or on RHEL CoreOS hosts as part of an OpenShift cluster. systemd did not work (well?) in containers for a while and has never been a requirement. There are several reasons for this RHEL-containers-on-RHEL/RHCOS requirement. For one, RHEL/UBI containers inherit their subscription information from their host, much like how RHEL VMs can inherit their subscription if you have virtualization host-based subscriptions. If containers weren’t tied to their host, then each container would need to subscribe to Red Hat on instantiation and would consume a Red Hat subscription instance.

https://access.redhat.com/articles/2726611


I was an early container adopter at a large RHEL shop, and they absolutely required us to use their forked version of Docker for the daemon, and RHEL-based images with systemd.

This was mostly so containers could register with systems manager and count against our allowed systems.

We ignored them because it was so bad and buggy. This is when I switched to CoreOS for containerized workloads.


I've worked in build/release engineering/devops for a long time.

I would be utterly shocked if corporate customers wouldn't want corporate Docker proxies/caches/mirrors.

Entire companies have been built on language specific artifact repositories. Generic ones like Docker are even more sought after.


Right, and Docker sold such products and from early on.

It requires CAP_BPF, which is considered a highly privileged capability.

So yes, it requires root in the sense of what people mean by root.


You can also enable unprivileged eBPF.
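
For reference, that's gated by a kernel sysctl; a minimal config sketch (the drop-in file path is an assumption, and note that on kernels where the value is locked to 1 it cannot be flipped back at runtime):

```shell
# Hypothetical drop-in, e.g. /etc/sysctl.d/99-unpriv-bpf.conf
# 0 = unprivileged bpf() allowed; 1 = disabled (often one-way until reboot);
# 2 = disabled but changeable. Relaxing this widens the kernel attack surface.
kernel.unprivileged_bpf_disabled = 0
```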


Definitely do not look at Kubernetes as a good example of Go code, and especially not of how it handles deps (btw, it only relatively recently switched to Go modules).

Not to say it is a bad project. Not at all.


I mean, that's exactly what you are doing with every single dependency you take on regardless of language.


KDE Connect with iOS, while useful, is terrible.

