Because C is 50 years old and boring. And Rust is slightly more intellectually demanding and takes longer to get into, so it has a smaller, monk-like comm-- oh my god.
I'd argue that Rust is less demanding intellectually than C as you don't have to constantly worry about UB. C is definitely easier to "get into", if by "getting into" you mean writing unmergeable contributions full of unidiomatic code and security vulnerabilities.
C's age is not an issue in itself. The programming languages it replaced were ahead of C in many ways. It was a setback from a language design point of view, even 50 years ago.
>> C's age is not an issue in itself. The programming languages it replaced were ahead of C in many ways. It was a setback from a language design point of view, even 50 years ago.
C takes a different approach to handling problems, one that is described well in Richard Gabriel's "The Rise of Worse is Better":
"Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called PC loser-ing because the PC is being coerced into loser mode, where loser is the affectionate name for user at MIT.
The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.
The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix -- namely, implementation simplicity was more important than interface simplicity.
The MIT guy then muttered that sometimes it takes a tough man to make a tender chicken, but the New Jersey guy didn’t understand (I’m not sure I do either).
Now I want to argue that worse-is-better is better. C is a programming language designed for writing Unix, and it was designed using the New Jersey approach. C is therefore a language for which it is easy to write a decent compiler, and it requires the programmer to write text that is easy for the compiler to interpret. Some have called C a fancy assembly language. Both early Unix and C compilers had simple structures, are easy to port, require few machine resources to run, and provide about 50%-80% of what you want from an operating system and programming language.
Half the computers that exist at any point are worse than median (smaller or slower). Unix and C work fine on them. The worse-is-better philosophy means that implementation simplicity has highest priority, which means Unix and C are easy to port on such machines. Therefore, one expects that if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven’t they?"
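To make the Unix side of that anecdote concrete: it survives today as the EINTR convention, where a system call interrupted by a signal may simply fail and the caller is expected to retry. A rough sketch in Rust, which surfaces EINTR as io::ErrorKind::Interrupted (the helper name read_retrying is just mine):

    use std::io::{self, Read};

    // Retry a read until it is not interrupted by a signal.
    // This is the "extra test and loop" the New Jersey guy mentions.
    fn read_retrying(reader: &mut impl Read, buf: &mut [u8]) -> io::Result<usize> {
        loop {
            match reader.read(buf) {
                Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
                other => return other,
            }
        }
    }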
I thought Rust didn't have a spec so everything in Rust was essentially undefined behaviour.
Has this changed, or is the "defined" part still the compiler source code? In that case, taking the source code of any C compiler as the _blessed_ one should get rid of any undefined behaviour problems as well.
You don't need a spec to have the concepts of defined vs. undefined behaviour. LLVM IR lacks a spec as well and is still built on those concepts (LLVM IR does have documentation, but so does Rust; neither has a single comprehensive document like the C specification).
Indeed, the notion of which behaviour is considered undefined changes with compiler versions and is not fixed yet. For example, mem::uninitialized() is now documented as undefined behaviour for essentially all types, and you are supposed to use MaybeUninit instead; you get a deprecation warning if you try to use the old API.
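For the curious, the replacement looks roughly like this; the only unsafe step is the explicit assume_init(), which is sound only once the value has actually been written:

    use std::mem::MaybeUninit;

    fn main() {
        // Reserve space without initializing it.
        let mut x = MaybeUninit::<u32>::uninit();
        // Initialize it...
        x.write(42);
        // ...and only then promise the compiler it is initialized.
        let x = unsafe { x.assume_init() };
        assert_eq!(x, 42);
    }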
This applies to unsafe Rust, however. With safe Rust, even though there is no spec, the guarantee is that you cannot hit undefined behaviour unless you run into one of the known soundness holes in the language, or a buggy piece of library code that uses unsafe internally.
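In other words, the unsafety has to be introduced somewhere behind a safe API, and that is where any bug would live. A toy example (made up, not from any real library):

    // Safe wrapper: callers cannot cause UB through this function.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            None
        } else {
            // SAFETY: the slice was just checked to be non-empty,
            // so index 0 is in bounds. A mistake in this check, not
            // in the callers, would be the soundness bug.
            Some(unsafe { *bytes.get_unchecked(0) })
        }
    }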