Programming is not something you can teach to people who are not interested in it in the first place. This is why campaigns like "Learn to code" are doomed to fail.
Whereas (good) programmers strive to understand the domain of whatever problem they're solving. They're comfortable with the unknown, and know how to ask the right questions and gather requirements. They might not become domain experts, but can certainly learn enough to write software within that domain.
Generative "AI" tools can now certainly help domain experts turn their requirements into software without learning how to program, but the tech is not there yet to make them entirely self-sufficient.
So we'll continue to need both roles collaborating as they always have for quite a while still.
Hmm, I think that's more difficult than using these tools for creating software. If generated software doesn't compile, or does the wrong thing, you know there's an issue. Whereas if the LLM gives you seemingly accurate information that is actually wrong, you have no way of verifying it, other than with a human domain expert. The tech is not reliable enough for either task yet, but software is easy to verify, whereas general information is not.
This type of software is mainly created to gain brand recognition, influence, or valuation, not to solve problems for humans. Its value is indirect and speculative.
These are the pets.com of the current bubble, and we'll be flooded by them before the damn thing finally pops.
Except you do get it all the time, just not as politely. Under every Simon Willison article you can see people calling him a grifter. Even under the Redis developer's post you can see people insulting him for being pro-AI.
> otherwise you can't really have a default value, because there's no way to tell if a given zero was explicit or implicit
You can use pointers, or nullable types. These are not ideal, admittedly, but it's not true that "there's no way".
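For illustration, here's a minimal Go sketch of the pointer approach; the struct, field, and default value are hypothetical, not from the article:

```go
package main

import "fmt"

// Config uses a pointer field so that nil ("not set") can be told
// apart from an explicit zero. All names here are made up for the example.
type Config struct {
	Workers *int // nil means "use the default"
}

func workers(c Config) int {
	if c.Workers == nil {
		return 4 // apply the default only when the field was left unset
	}
	return *c.Workers // an explicit 0 stays 0
}

func main() {
	zero := 0
	fmt.Println(workers(Config{}))               // 4 (implicit)
	fmt.Println(workers(Config{Workers: &zero})) // 0 (explicit)
}
```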
> there's no way to ensure that every field gets filled in
This can also be done with an exhaustive linter. You might think this isn't great either, but then again, always being reminded that you left out some fields is a) annoying, and b) goes against the benefit of default values altogether.
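To make that tradeoff concrete, here's a hypothetical sketch of what a struct-exhaustiveness linter (such as exhaustruct, assuming that's the kind of tool in question) would complain about:

```go
package main

import "fmt"

type Server struct {
	Host string
	Port int
}

func main() {
	// An exhaustiveness linter would flag this literal, since Port
	// is silently left at its zero value.
	a := Server{Host: "localhost"}

	// This version passes the linter, at the cost of spelling out every
	// field, which defeats the convenience of zero-value defaults.
	b := Server{Host: "localhost", Port: 0}

	fmt.Println(a, b)
}
```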
I agree with you on immutability, though.
I also agree with some of the points in the article, and have my own opinions about things I would like Go to do differently. But if we can agree that all programming languages have warts and that language designers must make tradeoffs, I would say that Go manages to make the right tradeoffs to be an excellent choice for some tasks, a good choice for many tasks, and a bad choice for a few tasks. That makes it my favorite language by a wide margin, though that's also a matter of opinion.
This is not about mindless worship, but about the fact that the UNIX design has stood the test of time for this long, and is still a solid base compared to most other operating systems. Sure, there are more modern designs that improve on security and capability (seL4/Genode/Sculpt, Fuchsia), but none are as usable or accessible as UNIX.
So when it comes to projects that teach the fundamentals of GNU/Linux, such as LFS, overwhelming the user with a large amount of user space complexity is counterproductive to that goal. I would argue that having GNOME and KDE in BLFS is largely unnecessary and distracting as well, but systemd is core to this issue. There are many other simpler alternatives to all of this software that would be more conducive to learning. Users can continue their journey with any mainstream distro if they want to get familiar with other tooling. LFS is not the right framework for building a distribution, nor should it cover all software in the ecosystem.
The first version of UNIX was released in 1971 and the first version of Windows NT in 1993, so UNIX predates NT by only about two decades. Both OSes have "stood the test of time", though one passed it with a dominant market share, whereas the other didn't. And systemd is heavily inspired by NT.
Time flies fast, faster than recycled arguments. :)
I'm confused as to which OS is the one that passed the other with dominant market share. Last I checked, Linux is everywhere, and Windows just keeps getting worse with every iteration.
I'm not sure I'd be smugly pronouncing anything about the superiority of Windows if I were a Microsoft guy today.
It's not surprising that systemd was heavily inspired by NT. That's exactly what Poettering was paid to create, by his employer Microsoft. (Oh, sorry--Red Hat, and then "later" Microsoft.)
Respectfully, that's nonsense. Linux is directly inspired by Unix (note: lowercase) and Minix, shares many of their traits (process and user model, system calls, shells, filesystem, small tools that do "one thing well", etc.), and closely follows the POSIX standard. The fact that it's not a direct descendant of commercial Unices is irrelevant.
In fact, what you're saying here contradicts that Rob Pike quote you agree with, since Linux is from the 1990s.
But all of this is irrelevant to the main topic, which is whether systemd should be part of a project that teaches the fundamentals of GNU/Linux. I'll reiterate that it's only a distraction to this goal.
I'm not familiar with what UNIX or its modern descendants have or have not implemented. But why should Linux mimic them? Linux is a Unix-like, and a standalone implementation of the POSIX standard. The init system is implementation-specific, just like other features. There has been some cross-system influence, in all directions (similar implementations of FUSE, eBPF, containers, etc.), but there's no requirement that Linux must follow what other Unices do.
If you're going to argue that Linux implementing systemd is a good idea because it's following the trend in "proper" UNIX descendants, then the same argument can be made for it following the trend of BSD-style init systems. It ultimately boils down to which direction you think is better. I'm of the opinion that simple init systems, of which there are plenty to choose from, are a better fit for the Linux ecosystem than a suite of tightly coupled components that take over the entire system. If we disagree on that, then we'll never be on the same page.
I strongly doubt this tool is nearly as popular as it appears to be. GitHub stars can be bought, and social media is riddled with bots. On the dead internet, it's cheap and trivial to generate fake engagement in order to reel in curious humans and potential victims.
I suspect this entire thing is a honeypot set up by scammers. It has all the tells: virality, grand promises, open source, and even the word "open" in the name. Humans should get used to this being the new normal on the internet. Welcome to the future.
That's not what I mean. Of course the buzz will reach mainstream media if everyone on social media seems to be talking about it.
What I mean is that the virality was bootstrapped by bots and then spread by humans. Virality can now be maintained entirely by bots, to give the appearance that there are more users than there actually are. But I doubt that the number of humans using it is anywhere close to what the amount of engagement suggests. Which wouldn't be surprising, considering the project is all about a large number of autonomous agents that interact with online services. It's a bot factory.
It's absolutely absurd that GitHub hasn't addressed it, to be honest. Right now it has 140k stars: more than foundational frameworks like Laravel or Express or universal tooling like ESLint or the Rust compiler.
> Qualcomm straight up refuses to support chips through this many Android releases.
That's not entirely accurate. They do provide chips with extended support, such as the QCM6490 in the Fairphone 5. These are not popular because most of the market demands high performance, and companies profit from churning out products every year, but solutions exist for consumers who value stability and reliability over chasing trends and specs.
The scenarios you mentioned are indeed nice use cases of ZFS, but other tools can do this too.
I can make snapshots and recover files with SnapRAID or Kopia. In the case of a laptop system drive failure, I have scripts to quickly set up a new system and restore data from backups. Sure, the new system won't be a bit-for-bit replica of the old one, and I'll have to manually tinker to get everything back in order, but these scenarios are so uncommon that I'm fine with this taking a bit more time and effort. I'd rather have that than rely on a complex filesystem whose performance degrades over time and is difficult to work with and understand.
You speak about ZFS as if it's a silver bullet, and everything else is inferior. The reality is that every technical decision has tradeoffs, and the right solution will depend on which tradeoffs make the most sense for any given situation.
How often do you test your OS replication script? I used to do that too, and every time there was something broken, outdated, or in need of modification, often right when I desperately needed a restore, like when I was about to leave on a business trip with a flight to catch and a broken laptop disk.
How much time do you spend setting up a desktop and maintaining it with mdraid+LUKS+LVM+your choice of filesystem, replacing a disk and resilvering, or making backups with SnapRAID/Kopia etc.? Again, I used to do that. I stopped after finding better solutions, partly because I always had issues during restores. Maybe small ones, but they were there, and when it's a real restore rather than a test, the last thing you want is problems.
Have you actually tested your backup by doing a sudden, unplanned restore, without thinking about it for three days beforehand? Do you do it at least once a year to make sure everything works, or do you just hope that since computers rarely fail and restores take a long time, everything will work when you need it? Back when I did things your way, like others I know who still do, practically no one ever tested their restore, and the recovery script was always one distro major release behind; you had to modify it every few releases when doing a fresh install. In the meantime, it's "hope everything goes well or spend a whole day scrambling to fix things."
Maybe a student is okay with that risk and enjoys fixing things, but generally it's definitely not best practice, and that's why most people are on someone else's computer, called the cloud, as protection from their own IT choices...
> How often do you test your OS replication script?
Not often. It's mostly outdated, and I spend a lot of time bringing it up to date when I have to rely on it.
BUT I can easily understand what it does and the tools it uses. In practice I use it rarely, so spending a few hours a year updating it is not a huge problem. I don't have the sense of urgency you describe; when things do fail, it's an extraordinary event, and everything else can wait until I'm productive again. I'm not running a critical business; these are my personal machines. Besides, I have plenty of spare machines I can use while one is out of service.
This is the tradeoff I have decided to make, which works for me. I'm sure that using ZFS and a reproducible system has its benefits, and I'm trying to adopt better practices at my own pace, but all of those have significant drawbacks as well.
> Have you actually tested your backup by doing a sudden, unplanned restore without thinking about it for three days before?
No, but again, I'm not running a critical business. Things can wait. I would argue that even in most corporate environments the obsession with HA comes at the cost of operational complexity, which has a greater negative impact than using boring tools and technology. Few companies need Kubernetes clusters and IaC tools, and even fewer people need ZFS and NixOS for personal use. It would be great if the benefits of these tools were accessible to more people with fewer drawbacks, but the technology is not there yet. You shouldn't gloss over these issues just because they're not issues for you.
Most companies have terrible infrastructure; they're hardly ever examples to follow. But they also have it because there's a certain widespread mentality among those who work there, which originates on the average student's desktop, where they play with Docker instead of understanding what they're using. This is the origin of many modern software problems: the lack of proper IT training in universities.
MIT came up with "The Missing Semester of Your CS Education" to compensate, but it's nothing compared to what's actually needed. It's assumed that students will figure it out on their own, but that almost never happens, at least not in recent decades. It's also assumed that it's something easy to do on your own and that it can be done quickly, which is certainly not the case, and I don't think it ever has been. But teachers who don't know these things themselves are the first to hold that bias.
The exceptional event, even if it doesn't require such a rapid response, still reveals a fundamental problem in your setup. So the question should be: why maintain this complex script when you can do less work with something else? NixOS and Guix are tough nuts to crack at first: NixOS because of its language and poor/outdated/not exactly well-done documentation; Guix because its development is centered away from the desktop and it lacks some elements common in modern distros, etc. But once you learn them, there's much less overhead to solve problems and keep everything updated, much less than maintaining custom scripts.
I'm currently troubleshooting an issue on my Proxmox server with very slow read speeds from a ZFS volume on an NVMe disk. The disk shows ~7 GB/s reads outside of ZFS, but ~10 MB/s in a VM using the ZFS volume.
I've read other reports of this issue. It might be due to fragmentation, or misconfiguration, or who knows, really... The general consensus seems to be that performance degrades after ~80% utilization, and there are no sane defragmentation tools(!).
On my NAS, I've been using ext4 with SnapRAID and mergerfs for years without issues. Being able to use disparate drives and easily expand the array is flexible and cost-effective, whereas ZFS makes this very difficult and expensive.
So, thanks, but no thanks. For personal use I'll keep using systems that are not black boxes, are reliable, and are performant enough for anything I'd ever need. What ZFS offers is powerful, but it also has significant downsides that are not worth it to me.
Honestly, pre-made containers are usually black boxes and a huge waste of resources. If anything, your problem is that you're not using NixOS or Guix; with them you'd have no reason to waste resources on Proxmox and maintain a massive attack surface made of ready-made containers from who knows who, maybe even with their forgotten SSH keys left inside, and with dependencies that haven't been updated in ages because whoever made them works in Silicon Valley mode, etc.
First of all, I don't see how containers are inherently black boxes or a waste of resources. They're a tool for containerizing applications, which can be misused like anything else. If you build your own images, they can certainly be lightweight and transparent. And they're based on well-known and stable Linux primitives.
Secondly, I'm not using containers at all, but VMs. I build my own images, mainly based on Debian. We can argue whether Linux distros are black boxes, but I would posit that NixOS and Guix are even more so due to their esoteric primitives.
Thirdly, I do use NixOS on several machines, and have been trying to set up a Guix system for years now. I have a love/hate relationship with NixOS because when things go wrong—and they do very frequently—the troubleshooting experience is a nightmare, due to the user-hostile error messages and poor/misleading/outdated/nonexistent documentation.
By "black box" I was referring to the black magic that powers ZFS. This is partly due to my own lack of familiarity with it, but whenever I've tried to learn more or troubleshoot an issue like the performance degradation I'm experiencing now, I'm met with confusing viewpoints and documentation. So given this, I'm inclined to use simpler tools that I can reasonably understand which have given me less problems over the years.
Ugh, containers/VMs are black boxes because in common practice you just pull the image as-is without bothering to study what's inside, without checking things like outdated dependencies left behind, some dev's forgotten SSH keys, and so on. There are companies that throw the first image they find from who-knows-who into production just because "it should have what I'm looking for"...
Are they knowable? Yes, but in practice they're unknown.
They waste resources because they duplicate storage, consume extra RAM, and so on to keep n common elements separate, without adding any real security, and with plenty of holes punched here and there to make the whole system/infra work.
This is also a terrible thing in human terms, as it leads to a false sense of security. And full-stack virtualization increases the overhead on x86 even more, again with no substantial benefit.
ZFS has a codebase that's not easy, sure, but using it is remarkably simple. On GNU/Linux the main problem is that it's not a first-class citizen: the license keeps it out of the mainline kernel, and it's a port from another OS rather than something truly native, even though a lot has been done to integrate it. But `zpool create mypool mirror /dev/... /dev/...` is definitely simple, as is `zfs create mypool/myvol` and so on... Compared to mdadm+LUKS+{pv,vg,lv}* etc. there's no comparison; it's damn easier and clearer.
GitHub stars are not a reliable metric[1]. Neither is engagement on social media, which is riddled with bots. It would be safe to assume that a project promoting bots is also using them to appear popular.
This whole thing is a classic pump and dump scheme, which this technology has made easier and more accessible than ever. I wouldn't be surprised if the malware authors are the same people behind these projects.