Out of interest, how is that relevant? Are we not able to criticize a FOSS maintainers response unless we run a project of scale ourselves? The maintainer is clearly engaging and knows what the problem is but stalls on the "last mile" which is issue creation. Do you agree?
wolfSSL also sells commercial licenses so it's not like they're going uncompensated for their work. Regardless, we shouldn't put people on pedestals because their title is "FOSS maintainer"
You know a social movement has gone full circle when a criticism so scathing that you couldn't have come up with it and made it trend before, even if you gave it your all, is now a motto and a point of pride for those who follow it.
This is happening at the same time as hundreds of millions of garden-variety consumers are being fed propaganda daily about how it's "finally time to switch to Linux", because it's so much better for them, the individual. If only they knew it's apparently not actually about them, never has been, and never will be.
When exactly is 'before'? Before GitHub existed to put your code and its issues front and center? Before it became an expectation to have a rich GitHub profile when you're considered for a job?
Of course I wouldn't have been able to come up with this statement back then, because the perverted view of OSS devs owing free work to the users of their software was not so pervasive.
On your edit: it's a bit rich calling the push to switch to Linux "propaganda", especially with the downturn in UX of Windows and macOS... Also why just hundreds of millions.. Go for hundreds of billions if you're just going to pull out numbers. Apart from that - even if Linux is not about the users, it is in many cases better for them as-is. Funny how that works with no conflict.
> Also why just hundreds of millions.. Go for hundreds of billions if you're just going to pull out numbers
You see, that would be because I did not just pull out an arbitrary number. "How many Windows users there are" is a reported fact you can just search for, and even the total is not "billions" (plural). I know, I was surprised too. From the horse's mouth: https://blogs.windows.com/windowsexperience/2025/06/24/stay-...
My first comment on this site pointing out that a FOSS user sounds entitled is from 2021. I've been saying it outside this site for 10+ years, back when it wasn't cringe to have a GitHub sticker on your laptop.
I maintain several FOSS projects, though none as popular as wolfSSL, and if I want a new issue filed to keep things clean, I usually do it myself, because then I can write it the way I want and include the information, and only the information, that I think is important. If I ask someone else to do it, there's a pretty good chance they won't write it the way I'd like, if they write it up at all.
That's actually impossible to answer. I maintain, contribute to, or have contributed to several FOSS projects, whose number varies depending on how you count them, and neither I nor anyone else who contributes to any FOSS project has the faintest idea how many people use them, especially if they're included in widely used distros, where the number is anything from zero to $number_of_distro_users.
Again: the maintainer does not say there is no bug. He says: please open a new issue, with a proper title and description for the actual underlying problem. Is that seriously too much to ask? Instead, the guy writes a whole blog post shitting on the project. Does anyone still wonder why people burn out on maintaining FOSS projects?
For both of them! Since both of them are aware now, either one could open that ticket. If the maintainer has very specific ideas about how a ticket should look, maybe they can do that themselves quickly, now that they are aware of not complying with the RFC. Then the ticket will perfectly match their expectations.
The maintainer is usually also the one who has to trace the root cause, which in this case the issue reporter did; that is certainly more work than creating an issue according to whatever formatting and other requirements the maintainer may have. So in that light, the reporter already did a big chunk of the work for the maintainer and the project. I wouldn't really call them "entitled" after that. Clearly they already put in more effort than could be expected.
Exactly, that's all his PR had to be. The history of finding the issue could be an interesting story (I bet it involves Elixir!), but in places it reads as almost malicious. If I received a PR anything like that on something I maintained, it would go over very poorly. The author comes off as overly aggressive toward the maintainers and far too sensitive to their response.
> Many of us believe on automatic memory management for systems programming
The problem is the term "systems programming". For some, it's kernels and device drivers. For some, it's embedded real-time systems. For some, it's databases, game engines, compilers, language run-times, whatever.
There is no GC that could possibly handle all these use-cases.
Why would you have to switch languages? There are no languages with 'no GC', there are only languages with no GC by default.
Take C - you can either manually manage your memory with malloc() and free(), or you can #include a GC library (-lgc is probably already on your system), and use GC_malloc() instead. Or possibly mix and match, if you're bold and have specific needs.
And if ever some new revolutionary GC method is developed, you can just replace your #include. Cutting-edge automatic memory management forever.
Except there is; it's only among GC-haters that there isn't.
People forget there isn't ONE GC, but rather several possible implementations depending on the use case.
Java Real-Time GC implementations are quite capable of powering weapon targeting systems on the battlefield, where a failure causes the wrong side to die.
> Aonix PERC Ultra Virtual Machine supports Lockheed Martin's Java components in Aegis Weapon System aboard guided missile cruiser USS Bunker Hill
Look, when someone says "There's no thing that could handle A,B,C, and D at the same time", answering "But there's one handling B" is not very convincing.
(Also, what's with this stupid "hater" thing, it's garbage collection we're talking about, not war crimes)
It is, because there isn't a single language that is a hammer for all types of nails.
It isn't stupid; it's the reality of how many have behaved for decades.
Thankfully, that issue has slowly been sorting itself out through generational replacement.
I already enjoy that on some platforms we have reached a point where the old ways are confined to a few scenarios, and that's it.
> Are there technical reasons that Rust took off and D didn't?
As someone who considered it back then when it actually stood a chance to become the next big thing, from what I remember, the whole ecosystem was just too confusing and simply didn't look stable and reliable enough to build upon long-term. A few examples:
* The compiler situation: the official compiler was not yet FOSS, and other compilers were either unavailable or not usable. The switch to FOSS happened far too late, and GCC support took too long to mature.
* This whole D version 1 vs version 2 thingy
* This whole Phobos vs Tango standard library thingy
* This whole GC vs no-GC thingy
This is not a judgement on D itself or its governance. I always thought it was a very nice language; the project simply lacked the manpower and commercial backing to overcome the magical barrier of wide adoption. There was some excitement when Facebook picked it up, but unfortunately, it seems it didn't really stick.
I think people forget this. I know a lot of folks that looked at D back when it needed to win mindshare to compete with the currently en vogue alternatives, and every one of them nope'd out on the licensing. By the time they FOSS'ed it, they'd all made decisions for the alternative, and here we are.
FOSS: DMD's source was always available, but the backend license was not FOSS-compatible until about 2017. D is now officially part of GCC (as of GCC 9), and even the D frontend in GCC is written in D (and actively maintained).
D1 vs. D2: D2 introduced immutability and a vastly superior metaprogramming system, but it had incompatibilities with D1. Companies like Sociomantic that had standardized on D1 were left with a hard problem to solve.
Tango vs. Phobos: this was a case of an alternative standard library with an alternative runtime. Programs that wanted to use both Tango- and Phobos-based libraries could not. This is what prompted druntime, which is Tango's runtime split out and made compatible, adopted by D2. Unfortunately, Tango took a long time to port to D2, and the maintainers went elsewhere.
gc vs. nogc: the language sometimes adds calls to the GC without obvious invocations of it (e.g. allocating a closure or setting the length of an array). You can mark a function with the @nogc attribute, and it will ban all uses of the GC, even compiler-generated ones. This severely limits the runtime features you can use, so it makes the language a lot more difficult to work with. But some people insist on it because it avoids GC pauses in code that can't tolerate them. There are those who think the whole std lib should be @nogc, to maximize utility, but we are not going in that direction.
The greatness of human accomplishment has always been measured by size. The bigger, the better. Until now. Nanotech. Smart cars. Small is the new big. In the coming months, Hooli will deliver Nucleus, the most sophisticated compression software platform the world has ever seen. Because if we can make your audio and video files smaller, we can make cancer smaller. And hunger. And AIDS.
Back in... I don't know, 2010, we used Jenkins. Yes, that Java thingy. It was kind of terrible (like every CI), but it had a "Warnings Plugin". It parsed the log output with regular expressions and presented new warnings and errors in a nice table. You could click on them and it would jump to the source. You could configure your own regular expressions (yes, then you have two problems, I know, but it still worked).
Then I had to switch to GitLab CI. Everyone was gushing how great GitLab CI was compared to Jenkins. I tried to find out: how do I extract warnings and errors from the log - no chance. To this day, I cannot understand how everyone just settled on "Yeah, we just open thousands of lines of log output and scroll until we see the error". Like an animal. So of course, I did what anyone would do: write a little script that parses the logs and generates an HTML artifact. It's still not as good as the Warnings Plugin from Jenkins, but hey, it's something...
I'm sure, eventually someone/AI will figure this out again and everyone will gush how great that new thing is that actually parses the logs and lets you jump directly to the source...
Don't get me wrong: Jenkins was and probably still is horrible. I don't want to go back. However, it had some pretty good features I still miss to this day.
My browser can handle tens of thousands of lines of logs, and has Ctrl-F that's useful for 99% of the searches I need. A better runner could just dump the logs and let the user take care of them.
Why most web development devolved into a React-like "you can't search for what you can't see" is a mystery.
What actually triggered me most was the apparently big salary bump you can get for "golang" in the UK? That makes no sense, and I'm guessing this is due to small sample sizes.
It's the median salary: 50% of people earn more than 62.4k. 10% earn more than 80k. It's still low compared to the US, but what isn't?
For this, you get proper health and unemployment insurance, usually 30 days of paid vacation, up to 6 weeks of sick leave with full salary, up to 10 days to take care of sick children with full salary, parental leave, the right to work part-time if desired, and so on. I don't know where you got the "people can be let go anytime" idea from, because Germany is pretty famous for its "Kündigungsschutz", and it's very hard to let people go over performance issues alone, which is why things like stack ranking and performance improvement plans pretty much do not exist here.
I can understand if young people without kids do not care about these things and just want the money. However, once you get older, you'll see the advantages.
I partly agree with you. The benefits are great and well above international norms. But I no longer believe in the "firing protection". Last year alone I saw thousands let go in Berlin in fairly large organizations, neobanks for example. My previous employer let go of 30% of its staff over the year. A simple Google search for "Berlin IT firings 2025" will give you the picture.
"The union" should be "a union", of which, companies are rarely a part of ie zero.
Their workforce may be a member of a union, some equals in grade may belong to different unions.
This is just half of what Time Machine does. What people are constantly missing is that Apple Time Machine is fast, as it does not need to walk through the whole filesystem to find changed files. Thanks to FSEvents, introduced in Mac OS X Leopard, it knows which directories actually contain changed files and hence usually only needs to check a small fraction of the filesystem. (Not sure if it still works that way after the switch to APFS).
This would of course also be possible on Linux (using *notify), and there are some projects that try to do this, but it's really hard to do reliably. You might argue that this feature matters less now that NVMe SSDs are so fast, but I still remember very well how astonished I was that creating a new Time Machine snapshot on OS X Leopard took mere seconds.
I lost all my files to Time Machine in 2008. I don't remember exactly what happened. But since then I'll take a slightly slower, observable command-line copy over sparkly magic.
Yes, I do not trust TM. That's why I have both a backup with TM for convenience and also to have all the files (including system files), and a mirror of the important files (basically my home directory) with `rsync`.
No, the right way to do this on Linux and FreeBSD is to use zfs with zfs send/receive. Creating snapshots and sending them is efficient enough to use it as the underlying storage for moderately loaded databases and VMs.
They are atomic and require zero downtime. They can be encrypted and resent to other machines. Cloning whole machines from them is easy and efficient.
Well, it's not really portable: despite being a "POSIX script", most of the date and sed tricks I use don't work on the BSD versions of those commands, and comrak is an additional dependency.
Out of interest: which FOSS projects are you maintaining, and how many users do these have, approximately?