They're basically necessary to do business in an advanced society that has rules governing every aspect of life. Those rules (even if good) often have unintended consequences, and advocacy groups can help ensure that their industries are considered during rule-making.
Lobbying as practiced by these advocacy groups is basically American-flavored corruption. If the same system were in place anywhere else we’d call it corruption, but it’s our system, so we call it Lobbying.
Yes but that ship has sailed, so now you need anti-corruption corruption to hopefully corrupt the corruption so it's less corrupt.
Like, the only reason "almond advocacy" even exists is because dairy and beef are some of the most lobbied and blessed industries in America. They can practically do whatever they want, whenever they want, so fuck you, pal.
this assumes that the needs of both the workers and the owners (who pay for the lobbying) are aligned, and they're often not
lobbyists will advocate for taking the water right from under the noses of the workers, and the workers will turn around and praise their employer for maintaining their jobs... it's often some kind of perverse shell game
at the end of the day the owners fly off to wherever with a pile of money and the workers are left without jobs or water. These false dichotomies of "if it weren't for lobbyists, all jobs would be regulated away" are often used to discourage people from actually changing these systems
I've built enough things to know that you don't build understanding by insulting people. You seem to disagree with my position, but are unable to find the words to defend yours.
This sounds a bit sensationalized, and I'm not sure if it's the source from which I originally learned of the issue, but:
>In a series of secret meetings in 1994, the Resnicks seized control of California’s public water supply. Now they’ve built a business empire by selling it back to working people.
Almond trees are stuck as almond trees forever. They can't switch to something else. Anywhere an investment in something is entrenched like this, you'll find lobbying.
I doubt most people would pass a financial audit, unless they had built their life around "proper accounting controls". Of course, if they had an incentive to do so, it would probably only take two or three years to get things in place and consistent. But then, most people don't have 1.3 million active employees [1] and who knows how many contractors and retirees. The DoD's finances were not designed around audit requirements, and it takes time and consistent attention to retrofit accounting controls onto a very large organization.
You’re so right, and that is exactly why we don’t have an entire class of jobs whose purpose is to get people to spend more than they intended. Yup, all personal responsibility. 50% of people have less than a middle-school education level, but you’re totally correct, it’s just ALL personal responsibility. No other factors at all!
I do think that the amount of regulation is proportional to the complexity of the society. While you can over- or under-regulate, the general trend will be toward more regulation.
Or run no registry at all. Here's a port from a Dockerfile to a plain VM:
FROM debian
RUN apt-get install -y thing
RUN curl blabla/install.sh
Pretty much converts to:
aws ec2 run-instances
ssh user@server apt-get install -y thing
ssh user@server curl blabla/install.sh
In general, every time you dispense with a high-level abstraction, the solution is not to replicate the high-level abstraction, but to build directly at a lower level of abstraction.
If you want to replace burgers, just buy a slab of meat and throw it on the fire, or bake your own bread. You don't need to make preservatives or buy artificial sweeteners, etc...
The thing with containerization is that it is sometimes used to virtualize an OS and sometimes to virtualize processes.
My containerless workflow, compared to typical container workflows, usually involves splitting some of the responsibilities between the OS virtualization layer and the process layer.
For example, if I have a testing server and a prod server, to test a change I just git push to the testing branch. Which is quite fast and reproducible.
Yes, in theory there can be side effects and leftovers from the previous version, but I am also a competent programmer and can ssh into the server to debug, so it's not a huge issue. Bottom line: I don't virtualize as often.
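For what it's worth, that push-to-test setup can be sketched as a bare repo plus a post-receive hook on the server (the host, paths, and branch name here are all assumptions for illustration):

```shell
# One-time setup on the testing server: a bare repo whose post-receive
# hook checks the pushed branch out into the service directory.
ssh user@testing-server '
  git init --bare /srv/git/app.git
  cat > /srv/git/app.git/hooks/post-receive <<"HOOK"
#!/bin/sh
GIT_WORK_TREE=/srv/app git checkout -f testing
HOOK
  chmod +x /srv/git/app.git/hooks/post-receive
'

# From then on, deploying to testing is just:
git remote add testing user@testing-server:/srv/git/app.git
git push testing testing
```

The hook runs inside the bare repo on every push and force-checks-out the testing branch into the working directory the service runs from.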
To take a wildly different use case of containers: if I want to have two different systems on the same machine, I just run the two different systems as processes. There's a process for the SQL DB and one for the HTTP server, and we're fine. You can even use separate users for more stringent encapsulation and security guarantees.
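A minimal sketch of the two-systems-as-processes idea (the choice of services, paths, and port are illustrative, not from the original comment):

```shell
# Two independent "systems" running side by side as plain processes:
python3 -m http.server 8080 --directory /srv/web &   # the HTTP server
postgres -D /srv/db/data &                           # the SQL DB
wait   # both keep serving until killed
```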
But since we are talking about registries, I focused on the third distinct use case: deployment automation.
The full details of how to live without docker (and docker registries by extension) won't fit in a Hacker News comment, but rest assured it's 100% possible and you'll be fine.
I'm focusing on docker as a whole because if I can prove that you don't need Docker, then by extension I prove you don't need a docker registry. It's a deliberately overkill argument, to show how ridiculous complaining about your free docker registries is. You are out here complaining about a problem with your Docker registries, while I'm a chad who can just axe Docker like it's nothing.
> The thing with containerization is that it is sometimes used to virtualize an OS and sometimes to virtualize processes.
You can use any tool wrong - that does not make the tool obsolete.
> The full details of how to live without docker (and docker registries by extension) won't fit in a Hacker News comment
An explanation of how to do the same with docker WOULD fit in a hacker news comment. Sometimes, abstractions are useful, even if they add another layer on top of existing stuff (and more dependencies).
"An explanation of how to do the same with docker WOULD fit in a hacker news comment. Sometimes, abstractions are useful, even if they add another layer on top of existing stuff (and more dependencies)."
There are so many use cases for docker / (kernel) virtualization that you most certainly cannot fit them in a comment. But I would be entertained if you tried.
Virtualization is as generic and flexible as a programming language, you really can use it to do anything a Turing machine can do.
Careful now: people will start accusing you of NIH.
But I fully agree with you. Likely what you need is a tiny subset of the capabilities wrapped up by the higher level abstraction, so implement them directly.
Over time you may find you need additional capabilities (although that's far from a given) and, if and when you do, you'll need to make decisions about whether to implement them directly, or wrap everything in a higher level abstraction (or use a third party abstraction).
The point is that if you ever do need these additional capabilities there's a good chance it's because you've been successful enough to enter "good problem to have" territory because you didn't waste time getting distracted by them earlier on and instead chose to focus on work that enabled that success.
If you wanna NIH you can just build your own docker. It's just an abstraction around some newer syscalls for process isolation. There's really not much magic to be found if you look into how it's done.
You can probably have a working prototype up in a weekend if you've got some systems programming experience.
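As a minimal sketch of that idea, you can get surprisingly far with util-linux's unshare(1) rather than raw syscalls. The `./rootfs` path is an assumption — e.g. a Debian tree unpacked with debootstrap:

```shell
# A toy "container": an unprivileged user namespace in which we appear as
# root, fresh PID and mount namespaces, proc mounted inside the new root,
# and a chroot into our own root filesystem.
unshare --user --map-root-user --fork --pid --mount-proc=./rootfs/proc \
    chroot ./rootfs /bin/sh
```

That's most of the isolation story; the rest of Docker (layered images, networking, the registry protocol) is plumbing on top.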
Just look at how much shorter and nicer the docker example is compared to the VM example. Also first example runs locally on any computer with docker or podman or whatever installed, second example exclusively runs on AWS.
"Just look at how much shorter and nicer the docker example is compared to the VM example."
Is this trolling? Who gives a shit? It's 3 lines that will be buried deep in the stack. You can even do it manually (gasp!) and write the steps with screenshots in a Word document or an email.
"Also first example runs locally on any computer with docker or podman or whatever installed, second example exclusively runs on AWS."
So we have a multi-GB, fully vendor-neutral system that runs on any provider that supports a free operating system, or even on your own machine. And you are getting hung up because the process for deploying that vendor-neutral system is itself not vendor neutral?
This is what I was writing about: getting hung up on the 1% last mile. It's going to take so much effort to convert that last mile into a fully compliant vendor-neutral solution, for almost no benefit. I just proved that you can port it in 3 seconds; if we migrate to GCP, I just change the first line and you're done.
Furthermore, as soon as you want to make this solution 100% compliant with whatever metric (in this case vendor neutrality), you introduce more dependencies, with more stuff to make vendor neutral. In a sense you are now locked in to Docker; shouldn't we make an abstraction layer so that we can run this thing on Docker or Podman interchangeably?
Get your focus back on the actual product you are building instead of how nice 3 lines look.
There are several use cases for virtualization. You might be under the illusion that everyone uses docker for the same use case, but you will probably find 3 or 4 use cases if you talk with other users: deterministic testing, automated deployments, external dependency installation (docker run psql).
Besides the semantic use cases of Docker, which are endless, technically docker is a virtualization and isolation mechanism. So whatever you do with docker can also be done with type 1 or type 2 virtualization, or even with processes and users (hint: that's what docker is actually built on). There are plenty of junior programmers who learn to docker run something to isolate it without ever learning how to isolate with basic user permissions.
So the use case isn't really relevant when I say that you can live without docker.
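For the record, the "basic user permissions" route looks roughly like this — the account name, paths, and service script are all made-up illustrations:

```shell
# Run a service as a dedicated unprivileged user instead of in a container:
sudo useradd --system --no-create-home appsvc     # service account, no login home
sudo install -d -o appsvc -g appsvc /srv/appsvc   # directory only appsvc owns
sudo -u appsvc /srv/appsvc/run.sh                 # hypothetical service entry point
```

The service can't touch files outside what its user owns, which covers a decent chunk of what people reach for containers to get.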
In general if you ever find yourself complaining about your free stuff, I recommend that you uninstall it to show yourself you don't need it. And then you can reconsider to come back to it again from a place of gratefulness instead of demanding neediness.
>"Virtualization" is the act of making a set of processes believe that it has a dedicated system to itself.
>Full virtualization and paravirtualization are not the only approaches being taken, however. An alternative is lightweight virtualization, generally based on some sort of container concept.
>To this end, we describe the new FreeBSD "Jail" facility, which provides a strong partitioning solution, leveraging existing mechanisms, such as chroot(2), to what effectively amounts to a virtual machine environment.
> And there are plenty of junior programmers who think containers = virtualisation :)
At least we do agree on one thing, there is one and only one subpar engineer among us.
> At least we do agree on one thing, there is one and only one subpar engineer among us.
Indeed, the one who thinks containers are nothing but Docker.
Or better yet, the one who cites a PDF from 2000 or an article from 2006, both from before EC2 even launched, to argue that virtualization is synonymous with containers, as if the meaning hasn’t shifted since then…
Or, at the very least, the one who thinks “ssh apt-get install” is equivalent to a container image.
>Indeed, the one who thinks containers are nothing but Docker.
I just cited papers talking about jails, zones and chroot..
>to say virtualization is synonymous with containers
I said containers are virtualization, not that virtualization is synonymous with containers. That is container ∈ virtualization, not container = virtualization.
No offense, but that's a high-school-level reading comprehension error right there.
>Or better yet, the one who cites a PDF from 2000 or an article from 2006,
At least I cited stuff.
Go ahead and submit something to Wikipedia with sources if you think containers are no longer virtualization.
what a profoundly useless comment. "why don't you do something else, unrelated, which doesn't solve any of the problems you have?" is absolutely the Ur-HN Reply.
Since virtualization has so many use cases, it's not possible to list all the ways to do the same things without containerization (whether with traditional virtualization or with no virtualization at all). But rest assured: you can do it. There's nothing special about containers; it's just another way of doing things that you can choose not to use at all.
One of the core arguments is that DockerHub is the default for Docker. The article shows that the URL for dockerhub is baked into Docker and is used when no registry is specified.
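Concretely, a bare image name gets expanded against docker.io, and the default can be redirected at the daemon level. The mirror URL below is a placeholder, not a real registry:

```shell
# These two commands name the same image; docker.io/library/ is implied:
docker pull alpine
docker pull docker.io/library/alpine

# The daemon can be pointed at a mirror instead of Docker Hub:
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://registry.mirror.example.com"]
}
EOF
```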
Testing things on my laptop and deploying services to my home lab. Both benefit enormously from the minimal overhead, fast deployment, and ease of completely removing a service once I'm done with it.
Ah yes, that was one of the main selling points of docker early on, fixing the "works on my machine".
The way I solve that problem is by working directly on testing/staging servers which have the same specs as production.
I almost never run stuff on my local machine, and if I do it's a well isolated piece of code whose IO signature I know exactly and I can mock external systems with ease.
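One cheap way to mock an external system from shell, for the record: shadow its CLI with a stub earlier on PATH. The script under test and the canned response here are hypothetical:

```shell
# Stub out `curl` so the code under test never touches the network:
mkdir -p stubs
cat > stubs/curl <<'EOF'
#!/bin/sh
echo '{"status": "ok"}'   # canned response instead of a real HTTP call
EOF
chmod +x stubs/curl
PATH="$PWD/stubs:$PATH" sh ./myscript.sh   # myscript.sh is hypothetical
```

Anything the script invokes as `curl` now hits the stub, because PATH lookup finds the stub first.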
The point is Artifactory has basically all popular (and some not very popular) repository format support built-in while supporting serious traffic, sharding, replication, etc. so you don't have to hunt for and then maintain anything. They've got a good tool, it's just expensive.