Someone handing you a waffle-head hammer seems like a more apt comparison.
If you can get the hang of it, it's a safer hammer. You're still going to be manually managing your memory allocations, though. The nailgun people have garbage collection (and yes, for the GC folks, I'm aware you can still write allocation-free code, or at least take manual control of allocations in a garbage-collected language and therefore opt out of garbage collection overhead - but why would you unless there was a performance issue?)
I don't manage my memory manually any more in Rust than I do in garbage-collected languages (say Python or Go). (I do have to think about object lifetimes and what's pointing where in all three languages.)
As an aside, I really like not having to manually free locks, close files, release connections back to pools, etc. It's one resource management paradigm that applies to everything - not just memory.
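To make that concrete, here's a rough sketch (nothing clever, just scope-based cleanup): the mutex guard and the file are both released automatically when they go out of scope, with no unlock/close calls anywhere.

    use std::fs::File;
    use std::io::Write;
    use std::sync::Mutex;

    fn main() -> std::io::Result<()> {
        let log = Mutex::new(File::create("example.log")?);
        {
            let mut guard = log.lock().unwrap(); // lock acquired here
            writeln!(guard, "hello")?;
        } // lock released here, when the guard is dropped
        Ok(())
    } // file closed here, when `log` is dropped - no manual close/unlock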
Don't get me wrong, I really like Rust, but you can't deny that in Java or Python you can just put/share this class/object somewhere, even if that changes the lifecycle of a bunch of other components, and it will Just Work™ thanks to the GC. In Rust such a refactor will be safe in the end, but you'll have to manually look into a bunch of uses of said struct(s) and recursively refactor the whole thing while fighting (or being helped by) the compiler.
This is (in my opinion) the insurmountable difference between low- and high-level languages. Even though Rust is (arguably) the best and most readable low-level language, and as you note can sometimes even beat managed languages there (locking/file close), it will lose on major-refactor speed.
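For what it's worth, the GC-language move of "just share this object wherever" usually translates in Rust to wrapping things in explicit shared-ownership types and then touching every signature that takes them - a rough sketch, names made up:

    use std::sync::{Arc, Mutex};

    // In Java/Python you'd just hand out references to `Config`;
    // in Rust the "share it everywhere" refactor typically means
    // wrapping it in Arc (shared ownership) + Mutex (shared mutation).
    struct Config {
        verbose: bool,
    }

    fn main() {
        let shared = Arc::new(Mutex::new(Config { verbose: false }));

        let for_worker = Arc::clone(&shared);
        let worker = std::thread::spawn(move || {
            for_worker.lock().unwrap().verbose = true;
        });

        worker.join().unwrap();
        println!("verbose = {}", shared.lock().unwrap().verbose);
    }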
>You’re still going to be manually managing your memory allocations though.
I'm writing a 10,000 LoC library and I call drop() manually only 3 times, in exactly one place, and not for memory reasons. I'm not sure what you mean. If you mean thinking about lifetimes, sure. If you mean thinking about drop(), almost never.
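For context, this is the kind of thing I mean by a non-memory drop(): releasing a lock early rather than freeing memory (rough sketch):

    use std::sync::Mutex;
    use std::thread::sleep;
    use std::time::Duration;

    fn main() {
        let counter = Mutex::new(0u32);

        let mut guard = counter.lock().unwrap();
        *guard += 1;
        // Explicit drop to release the lock *before* the slow work below,
        // instead of holding it to the end of the scope. A lock-duration
        // decision, not memory management.
        drop(guard);

        sleep(Duration::from_millis(10)); // stand-in for expensive work
        println!("{}", counter.lock().unwrap());
    }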
I’d guess that not many people are using rust for work compared to how many use it for fun. If that’s true then most people using rust are doing so because they enjoy it, not because they’re forced to. So most people with an active interest in the language are the ones who enjoy it. It’s similar for other niche languages which haven’t made it into many workplaces. Compare that to the mainstream languages which a lot of us have to use whether we like them or not, and so we have opinions about them which are not necessarily positive.
Because C is 50 years old and boring. And Rust is slightly more intellectually demanding and takes longer to get into, so it has a smaller monk-like comm-- oh my god.
I'd argue that Rust is less demanding intellectually than C as you don't have to constantly worry about UB. C is definitely easier to "get into", if by "getting into" you mean writing unmergeable contributions full of unidiomatic code and security vulnerabilities.
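As a small illustration of not constantly worrying about UB - two classic C foot-guns that have defined, handleable outcomes in safe Rust (rough sketch):

    fn main() {
        let xs = [1, 2, 3];

        // Out-of-bounds access: undefined behaviour in C,
        // a defined outcome you can handle in safe Rust.
        match xs.get(3) {
            Some(v) => println!("{}", v),
            None => println!("index out of bounds, handled"),
        }

        // Signed overflow: undefined behaviour in C; here it's an explicit choice.
        let big: i32 = i32::MAX;
        println!("{:?}", big.checked_add(1)); // prints None instead of invoking UB
    }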
C's age is not an issue in itself. The programming languages it replaced were ahead of C in many ways. It was a setback from a language design point of view, even 50 years ago.
>> C's age is not an issue in itself. The programming languages it replaced were ahead of C in many ways. It was a setback from a language design point of view, even 50 years ago.
C takes a different approach to how it handles problems that was described well in "The Rise of Worse is Better":
"Two famous people, one from MIT and another from Berkeley (but working on Unix) once met to discuss operating system issues. The person from MIT was knowledgeable about ITS (the MIT AI Lab operating system) and had been reading the Unix sources. He was interested in how Unix solved the PC loser-ing problem. The PC loser-ing problem occurs when a user program invokes a system routine to perform a lengthy operation that might have significant state, such as IO buffers. If an interrupt occurs during the operation, the state of the user program must be saved. Because the invocation of the system routine is usually a single instruction, the PC of the user program does not adequately capture the state of the process. The system routine must either back out or press forward. The right thing is to back out and restore the user program PC to the instruction that invoked the system routine so that resumption of the user program after the interrupt, for example, re-enters the system routine. It is called PC loser-ing because the PC is being coerced into loser mode, where loser is the affectionate name for user at MIT.
The MIT guy did not see any code that handled this case and asked the New Jersey guy how the problem was handled. The New Jersey guy said that the Unix folks were aware of the problem, but the solution was for the system routine to always finish, but sometimes an error code would be returned that signaled that the system routine had failed to complete its action. A correct user program, then, had to check the error code to determine whether to simply try the system routine again. The MIT guy did not like this solution because it was not the right thing.
The New Jersey guy said that the Unix solution was right because the design philosophy of Unix was simplicity and that the right thing was too complex. Besides, programmers could easily insert this extra test and loop. The MIT guy pointed out that the implementation was simple but the interface to the functionality was complex. The New Jersey guy said that the right tradeoff has been selected in Unix -- namely, implementation simplicity was more important than interface simplicity.
The MIT guy then muttered that sometimes it takes a tough man to make a tender chicken, but the New Jersey guy didn’t understand (I’m not sure I do either).
Now I want to argue that worse-is-better is better. C is a programming language designed for writing Unix, and it was designed using the New Jersey approach. C is therefore a language for which it is easy to write a decent compiler, and it requires the programmer to write text that is easy for the compiler to interpret. Some have called C a fancy assembly language. Both early Unix and C compilers had simple structures, are easy to port, require few machine resources to run, and provide about 50%-80% of what you want from an operating system and programming language.
Half the computers that exist at any point are worse than median (smaller or slower). Unix and C work fine on them. The worse-is-better philosophy means that implementation simplicity has highest priority, which means Unix and C are easy to port on such machines. Therefore, one expects that if the 50% functionality Unix and C support is satisfactory, they will start to appear everywhere. And they have, haven’t they?"
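The "extra test and loop" from the essay is still with us, by the way. In Rust terms it looks roughly like this (a sketch; the standard library's read() reports ErrorKind::Interrupted, i.e. EINTR, and leaves retrying to the caller):

    use std::io::{self, Read};

    // The retry-on-interrupt loop the essay describes, sketched in Rust:
    // if read() is interrupted by a signal, the error is non-fatal and
    // the caller is expected to simply try again.
    fn read_retrying(reader: &mut impl Read, buf: &mut [u8]) -> io::Result<usize> {
        loop {
            match reader.read(buf) {
                Err(e) if e.kind() == io::ErrorKind::Interrupted => continue,
                other => return other,
            }
        }
    }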
I thought Rust didn't have a spec so everything in Rust was essentially undefined behaviour.
Has this changed or is the "defined" part still the compiler source code? In that case taking the source code of any C compiler as the _blessed_ one should get rid of any undefined behaviour problems as well.
You don't need a spec for the concepts of undefined vs defined behaviour. LLVM IR lacks a spec as well and is still built upon these concepts (LLVM IR does have documentation but so does Rust. There is no comprehensive document like the C specification for either).
Indeed, the notion of what behaviour is considered undefined changes with compiler versions, and it is not fixed yet. mem::uninitialized(), for example, is now considered essentially always undefined and you are supposed to use MaybeUninit instead. But you get a warning if you try to use the old API.
This is for unsafe Rust however. With safe Rust, even though there is no spec, the guarantee is that, unless you hit one of the soundness holes in the language, or a piece of user code that uses unsafe internally, you are safe.
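For the curious, the old-vs-new API difference looks roughly like this (a sketch; the deprecated function still compiles with a warning, MaybeUninit is the sanctioned route):

    use std::mem::MaybeUninit;

    // The replacement for the deprecated mem::uninitialized(): the slot is
    // explicitly "maybe uninitialized" and only becomes a real u32 after it
    // has been written and assume_init() is called.
    fn make_value() -> u32 {
        let mut slot = MaybeUninit::<u32>::uninit();
        unsafe {
            slot.as_mut_ptr().write(42); // initialize before reading
            slot.assume_init()           // UB if the slot were still uninitialized
        }
    }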
And yet hardly a day goes by without a Rust story on HN, and for the past 5 years it's scored most loved language on the annual Stack Overflow developer survey.
That said, Rust has an exit from cult status and a starting path toward the mainstream in its sights. If Rust makes it into the kernel, that is the beginning of Rust going mainstream.
Wasn't there an article on here the other day talking about how Rust will never make it into the kernel as-is? The sentiment seemed to be that the kernel developers simply reject the idea of packaged code and many of the modern paradigms in Rust. I only had a chance to skim it but it made it sound like Rust in the kernel would be forced/relegated to a very different usage and style compared to the way it is written elsewhere.
Here's your problem: no, that is not an accurate summary of the discussion. The whole point of the dialogue between the kernel maintainers and the Rust developers is to figure out what needs to be done to make Rust suitable for inclusion in the kernel - which has already resulted in changes to the Rust toolchain and standard library. So
> Rust will never make it into the kernel as-is
Is about the most negative possible way to frame it while being technically true. Nobody is suggesting that Rust be included in the kernel "as-is", they're suggesting that Rust be included in the kernel, and having a dialogue about what would need to be done on both sides to make that possible.
Thanks. I’m not going to delete or edit my original comment, but I will say my question was meant in good faith without the intent to cast shade. I’m really surprised I was downvoted. I know next to nothing about rust beyond a fleeting hour or two I’ve spent with it, but I don’t think the way I phrased my comment was any worse than the off-handed remarks made in that article. Perhaps it’s neither here nor there but I’ve worked a lot lately. I’ve worked more than I care to admit for reasons completely beyond my control. I still try to keep up with stuff like this because I care. I love my work and this industry. I’m doing my best.
Not only that, the Rust community also seems to have some weird complex where they not only evangelize their own language (which is fine) but also feel the need to bully and bash other languages (usually targeting mainly C/Go, but occasionally Java as well).
It's like functional programming in that both functional programming and Rust tickle just the right parts of engineers' brains.
I like Rust, I write a lot of it, but I can't help but feel that Rust isn't "it". I think another language will pave the path Rust trail-blazed, at some point.
Personally I think this comment explains why Go isn't the solution. It's magical for fast development when you follow the most minimal, well-trodden path. Stray outside it and you face demons at every turn.
We, like the commenter, have made the decision to start slowly deprecating our Go code base in favor of Rust.
People are writing production-level databases and distributed systems in Go... of course it's a systems language, unless your definition of a systems language is something super narrow.
There are some places you can't use a language with a large runtime like Go: for example, operating system kernels, real-time embedded systems, game engines, anywhere you want to be extremely careful about memory allocations. You can write Go for most of these use cases as toys and demonstrations, but this is where languages like C dominate and Rust/Zig are making inroads, because of the ability to execute on bare metal with no runtime.
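To be concrete about "no runtime": a bare-metal Rust binary can opt out of the standard library entirely, which is not really an option with Go's runtime. A minimal sketch (target and linker details omitted):

    #![no_std]  // no standard library, no allocator, no OS assumptions
    #![no_main] // no normal main(); we provide the entry point ourselves

    use core::panic::PanicInfo;

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        loop {} // bare-metal entry point: nothing runs before this
    }

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {} // with no runtime, we must say ourselves what a panic does
    }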
When I think of systems languages, I think of languages that are suitable for all of those use cases, but also for writing operating systems and code for embedded microcontrollers. Writing a kernel in a language with garbage collection is possible, but would it really be optimal compared to writing a kernel in other languages without it?