It really does help, in modern languages that provide tools in the box and whose ecosystem just accepts those as the default† tools, to have the default be that when you make a new project it just works, often by having it print "Hello, World!" or something else simple but definitive as proof that we made a program.
† Default means just that: neither Rust's own compiler nor the Linux kernel needs the cargo tooling, but these projects both have actual toolsmiths to maintain their build infrastructure and your toy program does not. There should be a default which Just Works at this small scale.
There's a weird cognitive bias where somehow people justify "I compiled this Hello World C++ project" as "C++ is easy" and yet "I wasn't able to understand how this optimized linear algebra library works" gets classed as "Rust is hard".
In reality it matters what you already know, and whether you want to understand deeply or are just interested in enough surface understanding to write software. There's a reason C++ has an entire book about its many, many types of initialization for example.
If you write correct Rust code it'll work; the borrowck is just that, a check. If the teacher doesn't check your homework where you wrote that 10 + 5 = 15, it's still correct. If you write incorrect code that breaks Rust's borrowing rules, it'll have unbounded Undefined Behaviour; unlike actual Rust, where that'd be an error, this thing will just give you broken garbage, exactly like a C++ compiler.
Evidently millions of people want broken garbage, Herb Sutter even wrote a piece celebrating how many more C++ programmers and projects there were last year, churning out yet more broken garbage, it's a metaphor for 2025 I guess.
KDE is a great desktop environment, but it's also notorious for being a buggy and unpolished DE [1]. It's good your experience wasn't like that, but it's certainly not how the software is generally perceived.
[1]: Of course, different versions have different levels of stability. Also, some of these bugs and problems wouldn't be prevented by using an alternative language such as Rust.
Well FWIW, the original poster's anti-C++ statements aside, removing the borrow checker does nothing except allow you to write thread-unsafe (or race condition-unsafe) code. Therefore, the only change this really makes is allowing you to write slightly more ergonomic code that could well break somewhere at some point in time unexpectedly.
Nope. Anything which wouldn't pass the borrowck is actually nonsense. This fantasy that magically it will just lose thread safety or have race conditions is just that, a fantasy.
The optimiser knows that Rust's mutable references have no aliases, so it needn't safeguard mutation, but without borrow checking this optimisation is incorrect and arbitrary undefined behaviour results.
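A minimal sketch of what that means in practice (hypothetical function; the error code is the real compiler's):

    // The optimiser may treat `a` and `b` as noalias, because borrow
    // checking normally guarantees two live `&mut i32` never overlap.
    fn bump_both(a: &mut i32, b: &mut i32) -> i32 {
        *a = 1;
        *b = 2;
        *a // can be compiled to the constant 1, since `b` "cannot" alias `a`
    }

    fn main() {
        let (mut x, mut y) = (0, 0);
        println!("{}", bump_both(&mut x, &mut y)); // fine: disjoint borrows
        // bump_both(&mut x, &mut x); // error[E0499]: cannot borrow `x` as
        // mutable more than once; without the check this would compile, and
        // the noalias assumption above would silently become a lie.
    }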
People hate C because it's hard, people hate C++ because it truly is rubbish. Rubbish that deserved to be tried but that we've now learned was a mistake and should move on from.
I’m sure some people could tiptoe through minefields daily for years, until they fail. Nobody is perfect at real or metaphorical minefields, and hubris is probably the only reason to scoff at people suggesting alternatives.
Of course. My sense is there are a lot fewer out-of-bounds accesses and use-after-frees. Maybe a world-class programmer can go several decades without writing a memory error in C/C++, but they will probably eventually falter; meanwhile the other 99.9% of programmers fail more often. Why would you decline a compiler’s help eliminating certain types of bugs almost entirely?
Herb Sutter and the C++ community as a whole have put a lot of energy into improving the language and reducing UB; this has been a primary focus of C++26. They are not encouraging people to “churn out more broken garbage”, they are encouraging people to write better code in the language they have spent years developing libraries and expertise in.
Yes, many or even most domains where C++ sees a large market share are domains with no other serious alternative. But this is an indictment of C++ and not praise. What it tells us is that when there are other viable options, C++ is rarely chosen.
The number of such domains has gone down over time, and will probably continue to do so.
The number of domains where low-level languages are required, and that includes C, C++, Rust, and Zig, has gone down over time and continues to do so. All of these languages are rarely chosen when there are viable alternatives (and I say "rarely" taking into account total number of lines of code, not necessarily number of projects). Nevertheless, there are still some very important domains where such languages are needed, and Rust's adoption rate is low enough to suggest serious problems with it, too. When language X offers significant advantages over language Y, its adoption compared to Y is usually quite fast (which is why most languages get close to their peak adoption relatively quickly, i.e. within about a decade).
If we ignore external factors like experience and ecosystem size, Rust is a better language than C++, but not better enough to justify faster adoption, which is exactly what we're seeing. It's certainly gained some sort of foothold, but as it's already quite old, it's doubtful it will ever be as popular as C++ is now, let alone in its heyday. To get there, Rust's market share will need to grow by about a factor of 10 compared to what it is now, and while that's possible, if it does so it will be the first language ever to do it at such an advanced age.
There's always resistance to change. It's a constant, and as our industry itself ages it gets a bit worse. If you use libc++ did you know your sort didn't have O(n log n) worst case performance until part way through the Biden administration? A suitable sorting algorithm was invented back in 1997, those big-O bounds were finally mandated for C++ in 2011, but it still took until a few years ago to actually implement it for Clang.
Except, as you say, all those factors always exist, so we can compare things against each other. No language to date has grown its market share by a factor of ten at such an advanced age [1]. Despite all the hurdles, successful languages have succeeded faster. Of course, it's possible that Rust will somehow manage to grow a lot, yet significantly slower than all other languages, but there's no reason to expect that as the likely outcome. Yes, it certainly has significant adoption, but that adoption is significantly lower than that of every language that ended up where C++ is now or higher.
[1]: In a competitive field, with selection pressure, the speed at which technologies spread is related to their relative advantage, and while slow growth is possible, it's rare because competitive alternatives tend to come up.
This sounds like you're just repeating the same claim again. It reminds me a little bit of https://xkcd.com/1122/
We get it: if you squint hard at the numbers you can imagine you're seeing a pattern, and if you're wrong, well, just squint harder and a new pattern emerges. It's foolproof.
Observing a pattern with a causal explanation - in an environment with selective pressure things spread at a rate proportional to their relative competitive advantage (or relative "fitness") - is nothing at all like retroactively finding arbitrary and unexplained correlations. It's more along the lines of "no candidate has won the US presidential election with an approval of under 30% a month before the election". Of course, even that could still happen, but the causal relationship is clear enough so even though a candidate with 30% in the polls a month before the election could win, you'd hardly say that's the safer bet.
You're basically just re-stating my point. You mistakenly believe the pattern you've seen is predictive and so you've invented an explanation for why that pattern reflects some underlying truth, and that's what pundits do for these presidential patterns too. You can already watch Harry Enten on TV explaining that out-of-cycle races could somehow be predictive for 2026. Are they? Not really but eh, there's 24 hours per day to fill and people would like some of it not to be about Trump causing havoc for no good reason.
Notice that your pattern offers zero examples and yet has multiple entirely arbitrary requirements, much like one of those "No President has been re-elected with double-digit unemployment" predictions. Why double digits? It is arbitrary, and likewise for your "about a decade" prediction: your explanation doesn't somehow justify ten years rather than five or twenty.
> You mistakenly believe the pattern you've seen is predictive
Why mistakenly? I think you're confusing the possibility of breaking a causal trend with the likelihood of doing that. Something is predictive even if it doesn't have a 100% success rate. It just needs to have a higher chance than other predictions. I'm not claiming Rust has a zero chance of achieving C++'s (diminished) popularity, just that it has a less than 50% chance. Not that it can't happen, just that it's not looking like the best bet given available information.
> Notice that your pattern offers zero examples
The "pattern" includes all examples. Name one programming language in the history of software that's grown its market share by a factor of ten after the age of 10-13. Rust is now older than Java was when JDK 6 came out and almost the same age Python was when Python 3 came out (and Python is the most notable example of a late bloomer that we have). Its design began when Java was younger than Rust is now. Look at how Fortran, C, C++, and Go were doing at that age. What you need to explain isn't why it's possible for Rust to achieve the same popularity as C++, but why it is more likely than not that its trend will be different from that of any other programming language in history.
> Why double digits? It is arbitrary, and likewise for your "about a decade" prediction
The precise number is arbitrary, but the rule is that the rate of adoption of any technology (or anything in a field with selective pressure) is proportional to its competitive advantage. You can ignore the numbers altogether, but the general rule about the rate of adoption of a technology or any ability that offers a competitive advantage in a competitive environment remains. The rate of Rust's adoption is lower than that of Fortran, Cobol, C, C++, VB, Java, Python, Ruby, C#, PHP, and Go and is more-or-less similar to that of Ada. You don't need numbers, just comparisons. Are the causal theory and historical precedent 100% accurate for any future technology? Probably not, as we're talking statistics, but at this point it is the bet that a particular technology will buck the trend that needs justification.
I certainly accept that the possibility of Rust achieving the same popularity that C++ has today exists, but I'm looking for the justification that that is the most likely outcome. Yes, some places are adopting Rust, but the number of those saying nah (among C++ shops) is higher than it was for any programming language that has ever become very popular. The point isn't that bucking a trend with a causal explanation is impossible. Of course it's possible. The question is whether breaking the causal trend is more likely than not.
Even when there are alternatives, sometimes it makes sense to use a library like Qt in its native language with its native documentation rather than through a binding, if you can do so safely.
Did I write that I hated somebody? I don't think I wrote anything of the sort. I can't say my thoughts about Bjarne for example rise to hatred, nobody should have humoured him in the 1980s, but we're not talking about what happened when rich idiots humoured The Donald or something as serious as that - nobody died, we just got a lot of software written in a crap programming language, I've had worse Thursdays.
And although of course things could have been better they could also have been worse. C++ drinks too much OO kool aid, but hey it introduced lots of people to generic programming which is good.
Correct me if I'm wrong, but I don't think you think that C++ programmers actually want to write "broken garbage", so when you say "millions of people want broken garbage" the implication is that a) they do write broken garbage, and b) they're so stupid they don't even know that's what they're doing. I can't really read it as anything other than in the same vein as an apartheid-era white South African statement starting "all blacks ...", i.e., an insult to a large class of people simply for their membership in that class. Maybe that's not your intent, but that's how it reads to me, sorry.
I can't help how you feel about it, but what I see is people who supposedly "don't want" something to happen and yet take little or no concrete action to prevent it. When it comes to their memory safety problem WG21 talks about how they want to address the problem but won't take appropriate steps. Years of conference talks about safety, and C++ 26 is going to... encourage tool vendors to diagnose some common mistakes. Safe C++ was rejected, and indeed Herb had WG21 write a new "standing rule" which imagines into existence principles for the language that in effect forbid any such change.
Think Republican Senators offering thoughts and prayers after a school shooting, rather than Apartheid era white South Africans.
Are you seriously comparing discrimination based on factors no one can control to a group literally defined by a choice they made? And you think that's a good-faith argument?
Considering how many people will defend C++ compilers bending over backwards to exploit some accidental undefined behaviour with "but it's fast though" then yeah, that's not an inaccurate assessment.
Rust isn't a one true language, no one necessarily needs to learn it, and I'm sure your preferred language is excellent. C and C++ are critical languages with legitimate advantages and use cases. Don't learn Rust if you aren't interested.
But Rust, its community, and language flame wars are separate concerns. When I talk shop with other Rust people, we talk about our projects, not about hating C++.
So don't use it. Rust is not intended to be used by everyone. If you are happy using your current set of tools and find yourself productive with them then by all means be happy with it.
You’re expressing the same attitude here, just in reverse. Some users not thinking highly of C++ doesn’t make Rust a worse or less interesting language.
>Haskell (and OCaml etc) give you both straightjackets..
Haskell's thing with purity and IO does not feel like that. In fact, Haskell does it right (IO is reflected in the type), and Rust messed it up ("safety" does not show up in types).
You want a global mutable thing in Haskell? just use something like an `IORef` and that is it. It does not involve any complicated type magic. But mutations to it will only happen in IO, and thus will be reflected in types. That is how you do it. That is how it does not feel like a straight jacket.
Haskell as a language is tiny. But Rust is really huge, with an endless number of behaviours and expectations to keep in mind, for some idea of safety that only matters for a small fraction of programs.
And that is why I find that comment very funny. Always using Rust is like always wearing something that constrains you greatly for some idea of "safety" even when it does not really matter. That is insane.
It does in Rust. An `unsafe fn()` is a different type than an (implicitly safe, by the lack of the keyword) `fn()`.
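A small sketch of that, with hypothetical function names:

    unsafe fn reckless() {}
    fn careful() {}

    fn main() {
        let f: fn() = careful;
        let g: unsafe fn() = reckless; // the unsafety lives in the type
        // let h: fn() = reckless;     // error[E0308]: expected fn pointer
        //                             // `fn()`, found `unsafe fn`
        f();
        unsafe { g() } // and so callers must acknowledge it
    }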
The difference is that unsafe fn's can be encapsulated in safe wrappers, whereas IO functions sort of fundamentally can't be encapsulated in non-IO wrappers. This makes the IO tagged type signatures viral throughout your program (and as a result annoying), while the safety tagged type signatures are things you only have to think about if you're touching the non-encapsulated unsafe code yourself.
>The difference is that unsafe fn's can be encapsulated in safe wrappers
This is the koolaid I am not willing to drink.
If you can add safety very carefully on top of unsafe stuff (without any help from compiler), why not just use `c` and add safety by just being very careful?
> IO tagged type signatures viral throughout your program (and as a result annoying)..
Well, that is what good type systems do. Carry information about the types "virally".
Anything short of that is a flawed system.
> If you can add safety very carefully on top of unsafe stuff (without any help from compiler), why not just use `c` and add safety by just being very careful?
Y'know people complain a lot about Rust zealots and how they come into discussions and irrationally talk about how Rust's safety is our lord and savior and can eliminate all bugs or whatever...
But your take (and every one like it) is one of the weakest I've heard as a retort.
At the end of the day, "adding safety very carefully atop unsafe stuff" is the entire point of abstractions in software. We're just flipping bits, after all. Abstractions must do unsafe things in order to expose safe wrappers. In fact that's literally the whole point of abstractions in the first place: they allow you to solve one problem at a time, so you can ignore details when solving higher-level problems.
"Hiding a raw pointer behind safe array-like semantics" is the whole point of a vector, for instance. You literally can't implement one without being able to do unsafe pointer dereferencing somewhere. What would satisfy your requirement for not doing unsafe stuff in the implementation? Even if you built a vector into the compiler, it's still ultimately emitting "unsafe" code in order to implement the safe boundary.
If you want user-defined types that expose things with safe interfaces, they have to be implemented somehow.
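A toy sketch of that shape (nothing like the real `Vec`, just an illustration): one unsafe dereference, hidden behind a bounds-checked safe method, verified once.

    struct TinyVec {
        buf: [i32; 4],
        len: usize,
    }

    impl TinyVec {
        fn new() -> Self {
            TinyVec { buf: [0; 4], len: 0 }
        }

        fn push(&mut self, x: i32) -> bool {
            if self.len == self.buf.len() {
                return false; // full; a real Vec would reallocate here
            }
            self.buf[self.len] = x;
            self.len += 1;
            true
        }

        fn get(&self, i: usize) -> Option<i32> {
            if i < self.len {
                // SAFETY: i < len <= buf.len(), so this read is in bounds.
                Some(unsafe { *self.buf.as_ptr().add(i) })
            } else {
                None
            }
        }
    }

    fn main() {
        let mut v = TinyVec::new();
        v.push(42);
        assert_eq!(v.get(0), Some(42));
        assert_eq!(v.get(1), None); // callers can't read out of bounds
    }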
As for why this is qualitatively different from "why not just use c", it's because unsafety is something you have to opt into in rust, and isn't something you can just do by accident. I've been developing in rust every day at $dayjob for ~2 years now and I've never needed to type the unsafe keyword outside of a toy project I made that FFI'd to GTK APIs. I've never "accidentally" done something unsafe (using Rust's definition of it.)
It's an enormous difference to something like C, where simply copying a string is so rife with danger you have a dozen different strcpy-like functions each of which have their own footguns and have caused countless overflow bugs: https://man.archlinux.org/man/string_copying.7.en
1. In `c` one has to remember a few fairly intuitive things, and enforce them without fail.
2. In Rust, one has to learn and remember an ever-increasing number of things and constantly deal with non-intuitive borrow-checker shenanigans that can hit your project at any point in development, forcing you to re-architect it, despite doing everything to ensure "safety". And the borrow checker can't be convinced otherwise.
I have had enough of 2. I might use Rust if I wanted to build a critical system with careless programmers, but who would do such a thing? For open source dependencies, one will have to go by community vouching or audit them oneself. You can't count something as "safe" just because it is in Rust, right? So what is the point? I just don't see it. I mean, if you look a bit deeper, it just does not make any sense.
What is the point? If I share something, someone is going to come along and say: that is not how you are "supposed" to do it in Rust.
And that is exactly my point. You need to learn a zillion Rust-specific patterns for doing every little thing to work around the borrow checker, and you end up kind of unable to come up with your own designs with the trade-offs you choose.
And that becomes very mechanical and hence boring. I get that it would be safe.
So yes, if I am doing brain surgery, I would use tools that prevent me from making quick arbitrary movements. But for everything else a glove would do.
To learn something is generally the point. Either me, or you. I’ve been developing in rust for half a decade now and genuinely do not know what you were talking about here. I haven’t experienced it.
So either there are pain points that I’m not familiar with (which I’m totally open to), or you might be mistaken about how rust works. Either way, one or both of us might learn something today.
All lessons are not equally valuable. Seemingly arbitrary reasoning for some borrow checker behavior is not interesting enough for me to learn.
In the past, I would come across something, look up the reasoning for it, and often the reasoning would be "What if another thread does blah blah blah", but my program is single-threaded.
Borrow checker issues do not require multiple threads or async execution to be realized. For example, a common error in C++ is to take a reference/iterator into a vector, then append/push onto the end of that vector, then access the original reference. If that causes reallocation, the reference is no longer valid and this is UB. Rust catches this because append requires a mutable reference, and the borrow checker ensures there are no other outstanding references (read-only or mutable) before taking the &mut self reference for appending.
This is generally my experience with Rust: write something the way I would in C++, get frustrated at borrow checker errors, then look into it and learn my C++ code has hidden bugs all these years, and appreciate the rust compiler’s complaints.
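A minimal sketch of that vector case as Rust sees it (the error code is the real compiler's):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0]; // shared borrow into v's heap buffer
        // v.push(4);      // error[E0502]: cannot borrow `v` as mutable
        //                 // because it is also borrowed as immutable;
        //                 // push may reallocate and invalidate `first`
        println!("{first}");
        v.push(4); // fine here: the borrow `first` ended at its last use
        println!("{v:?}");
    }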
>If that causes reallocation, the reference is no longer valid
Doesn't the append/push function return a pointer in that case? At least in `c` there are special functions that reallocate, and it is not done implicitly (but I understand someone could write a function that does it).
Thus it appears that the borrow checker's behavior is guided by bad designs in other languages. When a bad design is patched with more design, the latter often becomes non-intuitive and restricting. That seems to have happened with Rust's borrow checker.
In C++? No. The vector container is auto resizing. When it hits capacity limits it doubles the size of the allocation and copies the contents to the new memory. An insertion operation will give you an iterator reference to the newly inserted value, but all existing references may or may not remain valid after the call.
On “guided by bad design”: the borrow checker wasn’t written to handle this one use case. It was designed to make all such errors categorically impossible.
> If you can add safety very carefully on top of unsafe stuff (without any help from compiler), why not just use `c` and add safety by just being very careful?
There is help from the compiler - the compiler lets the safe code expose an interface that creates strict requirements about how it is called and interacted with. The C language isn't expressive enough to define the same safe interface and have the compiler check it.
You can absolutely write the unsafe part in C. Rust is as good at encapsulating C into a safe rust interface as it is at encapsulating unsafe-rust into a safe rust interface. Just about every non-embedded rust program depends on C code encapsulated in this manner.
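A hedged sketch of that pattern (2021-edition `extern` syntax; `abs` comes from the C library that std already links on common platforms):

    extern "C" {
        fn abs(x: i32) -> i32; // C's abs, declared as a foreign function
    }

    // The safe wrapper: callers never see the FFI boundary at all.
    fn c_abs(x: i32) -> i32 {
        // SAFETY: abs takes and returns a plain i32, no pointers involved.
        unsafe { abs(x) }
    }

    fn main() {
        println!("{}", c_abs(-5));
    }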
> Well, that is what good type systems do. Carry information about the types "virally". Anything short is a flawed system.
Good type systems describe the interface, not every implementation detail. Virality is the consequence of implementation details showing up in the interface.
Good type systems minimize the amount of work needed to use them.
IO is arguably part of the interface, but without further description of what IO it performs, it's a pretty useless detail of the interface. Meanwhile, exposing a viral detail like this as part of the type system results in lots of work. It's a tradeoff that I think is generally not worth it.
>the compiler lets the safe code expose an interface that creates strict requirements about how it is called and interacted with..
The compiler does not and cannot check if these strict requirements are enough for the intended "safety". Right? It is the judgement of the programmer.
And what is stopping a `c` function with such requirements from being wrapped in some code that actually checks those requirements are met? The only thing that the rust compiler enables is to include a feature to mark a specific function as unsafe.
In both cases there is zero help from the compiler to actually verify that the checks that are done on top are sufficient.
And if you want to mark a `c` function as unsafe, just follow some naming convention...
>but without further description of what IO it's a pretty useless detail of the interface..
Take a look at effect-system libraries which can actually encode "What IO" at the type level and make it available everywhere. It is a pretty basic and widely used thing.
> The compiler does not and cannot check if these strict requirements are enough for the intended "safety". Right? It is the judgement of the programmer.
Yes*. It's up to the programmer to check that the safe abstraction they create around unsafe code guarantees all the requirements the unsafe code needs are upheld. The point is that that's done once, and then all the safe code using that safe abstraction can't possibly fail to meet those requirements - or in other words any safety-related bug is always in the relatively small amount of code that uses unsafe and builds those safe abstractions.
> And what is stopping a `c` function with such requirements from being wrapped in some code that [doesn't] actually check those requirements are met?
Assuming my edit to your comment is correct - nothing. It's merely the case that any such bug would be in the small amount of clearly labelled (with the unsafe keyword) binding code instead of "anywhere".
> The only thing that the rust compiler enables is to include a feature to mark a specific function as unsafe.
No, the rust compiler has a lot more features than just a way to mark specific functions as unsafe. The borrow checker and its associated lifetime constraints, enforcing that variables that are moved out of (and aren't `Copy`) aren't used again, is one obvious example.
Another example is marking how data can be used across threads with traits like `Send` and `Sync`. Another - when compared to C anyways - is simply having a visibility system so that you can create structs with fields that aren't directly accessible via other code (so you can control every single function that directly accesses them and maintain invariants in those functions).
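Two tiny sketches of those checks (both commented-out lines are real compile errors):

    use std::rc::Rc;

    fn main() {
        // Moves: a value that has been moved out can't be used again.
        let s = String::from("hello");
        let t = s; // String is not `Copy`, so `s` is moved
        // println!("{s}"); // error[E0382]: borrow of moved value: `s`
        println!("{t}");

        // Threads: `Rc` is not `Send`, so it can't cross thread boundaries.
        let rc = Rc::new(1);
        // std::thread::spawn(move || println!("{rc}"));
        // error[E0277]: `Rc<i32>` cannot be sent between threads safely
        println!("{rc}");
    }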
> In both cases there is zero help from the compiler to actually verify that the checks that are done on top are sufficient.
Yes and no; "unsafe" in rust is synonymous with "the compiler isn't able to verify this for you". Typically rust docs do a pretty good job of enumerating exactly what the programmer must verify. There are tools that try to help the programmer do this, from simple things like a lint which checks that every time you wrote unsafe you left a comment saying why it's OK, and that you actually wrote something the compiler couldn't verify in the first place, to complex things like a (very slow) interpreter that carefully checks that in at least one specific execution every required invariant is maintained (with the exception of some FFI stuff that it fails on, as it is unable to see across language boundaries sufficiently well).
The rust ecosystem is very interested in tools that make it easier to write correct unsafe code. It's just rather fundamentally a hard problem.
* Technically there are very experimental proof systems that can check some cases these days. But I wouldn't say they are ready for prime time use yet.
> You want a global mutable thing in Haskell? just use something like an `IORef` and that is it. It does not involve any complicated type magic. But mutations to it will only happen in IO, and thus will be reflected in types. That is how you do it. That is how it does not feel like a straight jacket.
Haskell supports linear types now. They are pretty close in spirit to Rust's borrowing rules.
> Haskell as a language is tiny.
Not at all. Though much of what Haskell does can be hand-waved as sugar on top of a smaller core.
I think that is because when you start learning Haskell, you are not typically told about state monads, `IORef`s and the like that enable safe mutability.
It might be because monads involve a tad more advanced type machinery. But `IORef`s are straightforward, yet typically one does not come across them until a bit too late in their Haskell journey.
How are we still having the same trade-off discussion, argued so black and white, when reality has shown that both options are preferred by different groups?
Rust says that all incorrect programs (in terms of memory safety) are invalid but the trade is that some correct programs will also be marked as invalid because the compiler can't prove them correct.
C++ says that all correct programs are valid but the trade is that some incorrect programs are also valid.
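A hedged sketch of the first trade (hypothetical function): this is perfectly sound whenever i != j, but the checker can't prove the indices are distinct, so it refuses.

    fn two_muts(v: &mut [i32], i: usize, j: usize) {
        // let (a, b) = (&mut v[i], &mut v[j]); // error[E0499]: cannot
        //     // borrow as mutable more than once at a time
        v.swap(i, j); // the standard library provides a vetted escape hatch
    }

    fn main() {
        let mut v = [1, 2, 3];
        two_muts(&mut v, 0, 2);
        println!("{v:?}"); // [3, 2, 1]
    }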
You see the same trade being made with various type systems and people still debate about it but ultimately accept that they're both valid and not garbage.
>C++ says that all correct programs are valid but the trade is that some incorrect programs are also valid.
C++ does not say this; in fact, no statically typed programming language says this. They all reject programs that could in principle be correct but get rejected because of some property of the type system.
You are trying to present a false dichotomy that simply does not exist and ignoring the many nuances and trade-offs that exist among these (and other) languages.
Nope. C++ really does deliberately require that compilers will in some cases emit a program which does... something even though what you wrote isn't a C++ program.
Yes, that's very stupid, but they did it with eyes open; it's not a mistake. In the C++ ISO document the words you're looking for are roughly (exact phrasing varies from one clause to another) Ill-formed No Diagnostic Required (abbreviated as IFNDR).
What this means is that these programs are Ill-formed (not C++ programs) but they compile anyway (No diagnostic is required - a diagnostic would be an error or warning).
Why do this? Well because of Rice's Theorem. They want a lot of tricky semantic requirements for their language but Rice showed (back in like 1950) that all the non-trivial semantic requirements are Undecidable. So it's impossible for the compiler to correctly diagnose these for all cases. Now, you could (and Rust does) choose to say if we're not sure we'll reject the program. But C++ chose the exact opposite path.
No one disputes that C++ accepts some invalid programs, I never claimed otherwise. I said that C++'s type system will reject some programs that are in principle correct, as opposed to what Spivak originally claimed about C++ accepting all correct programs as valid.
The fact that some people can only think in terms of all or nothing is really saying a lot about the quality of discourse on this topic. There is a huge middle ground here and difficult trade-offs that C++ and Rust make.
I knew I should have also put the "(in terms of memory safety)" on the C++ paragraph, but I held off because I thought it would be obvious we were both talking about the borrow checker, in contrast to Rust which has one.
Yes, when it comes to types C++ will reject theoretically sound programs that don't type correctly. And different type system "strengths" tune themselves to how many correct programs they're willing to reject in order to accept fewer incorrect ones.
I don't mean to make it a dichotomy at all, every "checker", linter, static analysis tool—they all seek to invalidate some correct programs which hopefully isn't too much of a burden to the programmer but in trade invalidate a much much larger set of incorrect programs. So full agreement that there's a lot of nuance as well as a lot of opinions when it goes too far or not far enough.
For all its faults, and it has many (though Rust shares most of them), few programming languages have yielded more value than C++. Maybe only C and Java. Calling C++ software "garbage" is a bonkers exaggeration and a wildly distorted view of the state of software.
Why would this matter? The borrowck is (a) not needed during bring-up because as its name suggests it is merely a check, so going without it just means you can write nonsense and then unbounded undefined behaviour results, but (b) written entirely in Rust so you can just compile it with the rest of this "exotic hardware" Rust compiler you've built.
So the thing about "Ya'll should ..." is that it's often mistaking "Group A and Group B hold conflicting beliefs but share a characteristic" with "Group C, all the people with this characteristic, all exhibit an incoherent belief structure".
For example suppose you apply this to the US Senate. So instead of Group A (Democrats and those who caucus with them) and Group B (Republicans) we instead think there's a single Group C, Senators. Now their behaviour seems incoherent: this Group C seems to hold contradictory opinions and behaves irrationally, so why can't they get their act together? The actual answer is that we misunderstood; they're not a single coherent group, and that's why they don't act like one.
You might as well argue: "Part of my brain thought A at time Y. A different part of my brain had a different thought B, at a different time Z. Why the accusations of hypocrisy?"
The problem arises when an individual or group tries to represent themselves as more credible/consistent/coherent than they really are.
If you freely admit that you have multiple personality disorder, hypocrisy is to be expected from you as an individual. People know what they're in for.
If you respond to accusations of hypocrisy by saying: "Hm, that's a good point. I'll have to reflect and see if I can reach consistency here." Then people recognize you are making a good-faith effort.
I've observed that modern progressivism represents itself with a strong us/them boundary. The vociferousness of the rhetoric vastly outstrips the quality of the underlying reasoning/decision mechanism. And I've never seen a progressive say: "You make a good point, we'll have to debate on that."
You are correct that individual progressives may, in principle, be credible if they have a coherent philosophy which is consistently applied (including to critique their own "team" when appropriate).
But empirically, modern progressivism is more of a "meme ideology" where precepts are invoked when convenient, against whatever outgroup is currently fashionable. Progressive rhetoric, and progressive reasoning, is so flexible and untethered that if you're sufficiently talented at wielding it, it can be used to reach virtually any conclusion. The selective application of principles at the group level has strong parallels to how hypocrisy works at the level of an individual.
A movement can be meaningfully described as hypocritical, even if its individual members are not.
> I've never seen a progressive say: "You make a good point, we'll have to debate on that."
Humans are bad at that, and the ones who say it often don’t actually mean it. Some people claim to be open to debate, but that’s not the same as being open to changing one’s mind.
There are dozens of cases like this. E.g. if you say white people are too powerful, that's enlightened. If you say Jews are too powerful, that's antisemitic. Double standards are the rule. Consistent, impartial application of principles is a very rare thing with this ideology.
You should prefer to write unreachable!("because ...") to explain to some future maintenance engineer (maybe yourself) why you believed this would never be reached. Since they know it was reached they can compare what you believed against their observed facts and likely make better decisions.
But at least telling people that the programmer believed this could never happen short-circuits their investigation considerably.
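A minimal sketch of the difference (hypothetical function; `unreachable!` takes format arguments like `panic!`):

    fn parity(n: u32) -> &'static str {
        match n % 2 {
            0 => "even",
            1 => "odd",
            // The compiler can't see that `n % 2` is always 0 or 1, so an
            // arm is required; say why we believe it can never run:
            _ => unreachable!("n % 2 is always 0 or 1, got {}", n % 2),
        }
    }

    fn main() {
        println!("{}", parity(7));
    }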
For "adding an element to a vector" it's actually not necessarily what you meant here and in some contexts it makes sense to be explicit about whether to allocate.
Rust's Vec::push may grow the Vec, so it has amortized O(1) and might allocate.
However Vec::push_within_capacity never grows the Vec, it is unamortized O(1) and never allocates. If our value wouldn't fit we get the value back instead.
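A small sketch of the difference; note that `push_within_capacity` is a nightly-only API at the time of writing, so it appears only in the comment:

    fn main() {
        let mut v: Vec<i32> = Vec::with_capacity(2);
        let before = v.as_ptr();
        v.push(1);
        v.push(2);
        assert_eq!(v.as_ptr(), before); // within capacity: no reallocation
        v.push(3); // over capacity: the Vec grows and may move its buffer
        assert_eq!(v.len(), 3);
        // `v.push_within_capacity(3)` would instead have returned Err(3)
        // at the step above, handing the value back without allocating.
    }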
The usual thing for languages is to provide a global allocator. That's what C's malloc is doing for example. We're not asked to specify the allocator, it's provided by the language runtime so it's implied everywhere.
In standalone C, or Rust's no_std, there is no allocator provided, but most people aren't writing bare metal software.
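A sketch of how implicit that default is in Rust: you can name the global allocator explicitly, but every Box/Vec/String goes through it whether you do or not (here we just re-install the system default):

    use std::alloc::System;

    #[global_allocator]
    static GLOBAL: System = System; // same as the default for binaries

    fn main() {
        let v = vec![1, 2, 3]; // allocates via GLOBAL; no allocator named here
        println!("{v:?}");
    }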
If you need multiple allocators, and you aren't already using lisp, who cares? Just use lisp. Anyone who can use allocators against the C ABI can clearly see the benefits of using a language that can cater to the developer. Zig can never do this, yea even with its shallow macros. Zig will always be a shitty B knockoff rather than something artisans actually want to use.
It's also in the Handmade crowd, and for a lot of people that's intimately connected to video games. I actually think Handmade's approach is helpful for games in a way it isn't for a lot of other software.
Games are art. The key thing is that you have to actually make it. Handmade encourages people who might make some art to actually make something rather than being daunted by the skill needed for a very sophisticated technology. Handmade is like telling a would-be photographer "You already have a camera on your phone. Point it at things and take pictures of them" rather than "Choose your subject, then you will need to purchase either an SLR or maybe a larger camera, and suitable lenses and a subscription for Photoshop and then take a college course in photo composition"
I don't want to use a text editor made by someone who has no idea what they're doing and learned about rope types last week. A dozen handmade text editors, most not as good as pico, are of no value to anybody.
But I do want to play video games by people who have no idea what they're doing. That's what Blue Prince is, for example. A dozen handmade video games means a dozen chances for an entirely unprecedented new game. It'll be rough around the edges, but novelty is worth a lot.
I think Hoare is bang on, because we know the analogous values in many languages are also problematic even though they're not related to memory.
The NaNs are, as their name indicates, not numbers. So the fact this 32-bit floating point value parameter might be NaN, which isn't even a number, is as unhelpful as finding that the Goose you were passed as a parameter is null (ie not actually a Goose at all)
There's a good chance you've run into at least one bug where oops, that's NaN and now the NaN has spread and everything is ruined.
The IEEE NaNs are baked into the hardware everybody uses, so we'll find it harder to break away from this situation than for the Billion Dollar Mistake, but it's clearly not a coincidence that this type problem occurs for other types, so I'd say Hoare was right on the money and that we're finally moving in the correct direction.
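A tiny sketch of that spreading, in Rust for concreteness:

    fn main() {
        let nan = f32::NAN;
        assert!(nan != nan); // NaN isn't even equal to itself
        let average = (1.0 + nan) / 2.0; // one NaN input poisons the result
        assert!(average.is_nan());
        // Every ordered comparison with NaN is false, so min/max/sort logic
        // built on `<` quietly goes strange:
        assert!(!(nan < 1.0) && !(nan > 1.0) && !(nan == 1.0));
    }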
What I'm saying is, I disagree that "we know" these things. We know that there are bugs that can be statically prevented by having non-nullable types to enforce contracts, but that doesn't in itself make null the actual source of the problem.
A language with non-nullability-by-default in its reference types is no worse than a language with no null. I say this because, again, there will always be situations where you may or may not have a value. For example, grabbing the first item in a list; the list may be empty. Even if you "know" the list contains at least one item, the compiler does not. Even if you check the invariant to ensure that it is true, the case where it is false may be too broken to handle and thus crashing really is the only reasonable thing to do. By the time the type system has reached its limits, you're already boned, as it can't statically prevent the problem. It doesn't matter if this is a nullable reference or if its an Option type.
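A sketch of that situation in Rust terms: the "might be absent" case never disappears, the type just forces you to spell out what happens.

    fn main() {
        let items: Vec<i32> = Vec::new();
        match items.first() {
            Some(first) => println!("first = {first}"),
            None => println!("list was empty"),
        }
        // If we "know" it's non-empty, we still assert that at runtime:
        // let first = items.first().expect("non-empty by construction");
        // ...and if the invariant was wrong, crashing is all that's left.
    }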
Because of that, we're not really comparing languages that have null vs languages that don't. We're comparing languages that have references that can be non-nullable (or functionally equivalent: references that can't be null, but optional wrapper types) versus languages that have references that are always nullable. "Always nullable" is so plainly obviously worse that it doesn't warrant any further justification, but the question isn't whether or not it's worse, it's how much worse.
Maybe not a billion dollars worse after all.
P.S.: NaN is very much the same. It's easy to assign blame to NaN, and NaN can indeed cause problems that wouldn't have existed without it. However, if we had signalling NaNs by default everywhere, I strongly suspect that we would still curse NaN, possibly even worse. The problem isn't really NaN. It's the thing that makes NaN necessary to begin with. I'm not defending null as in trying to suggest that it isn't involved in causing problems, instead I'm suggesting that the reasons why we still use null are the true root issue. You really do fix some problems by killing null, but the true root issue still exists even after.