
> That single passing reference to him as King of Assyria, was everything that was known of him for 2500 years, until the ruins of Dur-Sharrukin, his capital (destroyed during the mentioned wars) were excavated in the 19th century.

I only have a layman's knowledge of this part of history, but this strikes me as unlikely. Surely something was known of him in other sources for some time, probably hundreds of years at least, before those sources were also lost? The loss of knowledge is also second-order: we have lost knowledge of what was known to our predecessors.


Sure, I might have overstated the case a bit. There was scattered use of Akkadian as a written language into Alexander the Great's time; there was possibly some knowledge of that era in Babylon around then.

Still, the end of Mesopotamian records is abrupt. There's a written record from prehistory until the wars of around 700 BC, when the use of writing drops off but doesn't completely stop. But written records in Akkadian, and the education of priests and scribes in Akkadian and Sumerian (itself dead as a spoken language for 2000 years by then, surviving purely as a liturgical language), basically stop after the fall of Babylon in 539 BC.

I'd note that writing in antiquity was deeply tied to both administration and religion. The Persians were Zoroastrians; they do not seem to have oppressed the older cults, but state support went to Zoroastrianism and not the traditional Mesopotamian cults (and their scribes and schools). They imposed their own administrative structure on the conquered territories. Persian became the written standard, pretty much overnight. Akkadian seems to have ceased to be spoken among the ruling class within a couple of generations, and within a few hundred more years it was not only dead but basically forgotten.


> Still, the end of Mesopotamian records is abrupt. There's a written record from prehistory until the wars of around 700 BC, when the use of writing drops off but doesn't completely stop. But written records in Akkadian, and the education of priests and scribes in Akkadian and Sumerian (itself dead as a spoken language for 2000 years by then, surviving purely as a liturgical language), basically stop after the fall of Babylon in 539 BC.

This probably isn't so much a move away from the use of writing, but a shift away from Akkadian cuneiform on clay tablets to Aramaic on vellum. Clay tablets, even unfired, survive a long time. Vellum doesn't in that climate.


I found these two episodes of the “Fall of Civilizations” podcast to be very informative about the Sumerians and Assyrians. It’s episodes 8[1] and 13[2]; definitely worth a listen in my opinion.

[1]: https://youtu.be/cq1g8czIBJY [2]: https://youtu.be/Qr4FiT0ks7I


I hope you don't. To establish my credentials for this comment - GATs were my idea.

The entire point of GATs was to carve out a design space that solved people's problems without being as high-minded as monads and HKT and so on. Unfortunately, a lot of people who like monads like to talk about GATs in the same way. But it's really as simple as this: when you add an associated type to a trait, you can make that type generic, the same way you can make a type alias generic when it's not in a trait. The whole point of specifying GATs and not a more general "HKT" is that it's an obvious extension of the existing language.

There are lots of useful traits that you can't define if you can't make an associated type generic, so people are excited to have this feature.

It turned out that implementing this in rustc took a very long time, because refactoring the typechecker of rustc to support this was challenging. But that doesn't mean the feature itself is complicated or difficult to use. The whole point of GATs is that it shouldn't really feel like a feature once it's done: naturally, if you are writing a trait with a method that returns `Self::Foo`, but you need it to be `Self::Foo<T>` or `Self::Foo<'a>`, you can just do that, rather than the compiler telling you that it's not supported.
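To make the "obvious extension" concrete, here's a minimal sketch. The trait and its names (`Collection`, `Of`, `singleton`) are purely illustrative, not from any real library: the associated type simply takes a type parameter, exactly like a generic type alias would.

```rust
// Illustrative only: a "container family" trait whose associated
// type is generic, i.e. a GAT.
trait Collection {
    // Before GATs, `type Of;` had to name one concrete type.
    // With GATs it can take a parameter, like a generic type alias.
    type Of<T>;

    fn singleton<T>(value: T) -> Self::Of<T>;
}

struct VecFamily;

impl Collection for VecFamily {
    type Of<T> = Vec<T>;

    fn singleton<T>(value: T) -> Vec<T> {
        vec![value]
    }
}

fn main() {
    // The implementation picks `Vec<T>` as the family's type.
    let v = VecFamily::singleton(7);
    assert_eq!(v, vec![7]);
}
```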


Not trying to make you feel like you have failed, but whenever I encounter language features on this level - aka not comprehensible for a mortal like myself - I do tend to want to shamefully turn away from this profession entirely. The good news is that I am most likely a total dunce.

I wish I could become reasonably proficient with Rust, because I do like the concepts that I think I grasp.


You're begging the question when you say that GATs are "not comprehensible for a mortal like myself." My entire point is that they were designed not to be incomprehensible.

What is "not comprehensible for a mortal" in this situation is why adding a generic to an associated type is a long-anticipated feature that took years of development to support. This is related to the "incomprehensible" discussion that occurs around this feature, talking about type functions and higher kindedness and all of these mathematical formalisms. But this discussion is just happening among practitioners and enthusiasts schooled in a certain jargon and area of arcane knowledge. It doesn't mean you need to understand any of this to just use the feature.

It's like people saying the internet is "not comprehensible for a mortal like myself" because most people do not have the background to properly understand TCP/IP and HTTP and how such things undergird the technology that they use. And yet they use the internet just fine. If the design has succeeded, you should not need to even hear words like "higher kindedness" before you can make your associated type generic.

Possibly the system design is imperfect and the abstractions leak, and you as a user have to learn more than one would hope in order to successfully use the tool to accomplish your goals. Rust doesn't have a perfect track record here.


> My entire point is that they were designed not to be incomprehensible.

It's frustrating that the documentation / announcement always seems to go for the most complicated way to explain just about everything, in what I assume is an attempt to ensure it's the most comprehensive explanation, in the smallest number of lines. It's fine to provide different levels of explanation/examples.

Numerous times I read stuff and think "Nope. Absolutely no idea why I'd use that, and I'm not entirely sure I have even half a clue what it's trying to do" until much, much later, when I see simpler practical applications of it and comprehension slowly dawns about what was being explained.


To be super clear I'm not trying to disparage you at all. I am just saying that I might not be cut out for understanding these complexities and ... honestly it deeply frustrates me, but I don't know how to overcome it.


Traits can have associated types.

    trait Foo {
        type Bar;
    }
Until now, those types had to be concrete, as in the above example. You can implement Foo with `Bar = u32` and `Bar = String` and even `Bar = Vec<bool>`.

    trait Foo {
        type Bar<T>;
    }
That `Bar` type above can now be generic over some other type `T`, so you can implement it for `Bar<T> = Vec<T>`, `Bar<T> = Box<T>`, `Bar<T> = Option<T>`, or any other generic type. That's it. That's the whole thing. Use it if it's useful to you.

If you've never needed to write a trait that incorporated an associated type (the "AT") which needed to be generic (the "G"), then this won't mean much to you, and that's fine. But you might be using libraries that could be somewhat more ergonomic if they were able to use this feature. The good news is that now that it's been released to stable, a future version of that library can be written more ergonomically.
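For what it's worth, the canonical motivating example (a sketch; `LendingIterator` is an illustrative trait name, not a std API) makes the associated type generic over a lifetime, so each item can borrow from the iterator itself:

```rust
// Sketch of a "lending iterator"; `Item<'a>` is the GAT.
trait LendingIterator {
    type Item<'a> where Self: 'a;
    fn next(&mut self) -> Option<Self::Item<'_>>;
}

// Overlapping mutable windows over a slice: not expressible with the
// ordinary `Iterator` trait, straightforward with a GAT.
struct WindowsMut<'s> {
    slice: &'s mut [i32],
    start: usize,
}

impl<'s> LendingIterator for WindowsMut<'s> {
    // The associated type is generic over a lifetime: each item
    // borrows from `self`, so consecutive windows may overlap.
    type Item<'a> = &'a mut [i32] where Self: 'a;

    fn next(&mut self) -> Option<Self::Item<'_>> {
        let window = self.slice.get_mut(self.start..self.start + 2)?;
        self.start += 1;
        Some(window)
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let mut it = WindowsMut { slice: &mut data, start: 0 };
    // Windows: [1,2], [2,3], [3,4]; bump the first element of each.
    while let Some(w) = it.next() {
        w[0] += 10;
    }
    assert_eq!(data, [11, 12, 13, 4]);
}
```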


Is that what this is all about? I feel like the examples people give are nowhere near this simple.


Yep.

People are trying to show how you might use it in practice, which is useful. But I think people should start with the above.

If you’ve ever used associated types, this is the sort of thing you would probably expect would be possible even if you never needed to do it. Now it’s possible.


When I find a language feature I don't understand even after reading what's out there, I ignore it. Then I plod along with my own code. Eventually I'll notice I've been writing the same boilerplate code again and again, and I'll finally see the use for the feature. At that point it's less a matter of "understanding" in a grand intellectual sense, and more just fitting my needs to the syntax. After a few such instances, I know the feature well enough to unblock someone else who doesn't get it. That's a good practical test of understanding.

TL;DR: don't worry about it. When you need it, you'll learn it.


Indeed, when I was very first learning to code I did this with arrays! Couldn’t understand the advantage of writing foo[0] and foo[1] over foo0 and foo1. And in some cases there isn’t really one. But of course arrays are very useful in general.


GATs seem easily understandable to me and I'm a "mere mortal". Like most things in software, the utility becomes clear when you actually have a practical purpose for its use. Don't get ahead of yourself.


There are other people, who are also mere mortals, that can understand them. I think you could understand them if you tried. You're not a dunce!


> The entire point of GATs was to carve out a design space that solved people's problems without being as high-minded as monads and HKT and so on.

High-minded or snobbish or whatever other derogatory word one wants to use: the benefit of something like HKT is that it encompasses in one thing what Rust now ends up catching up to by implementing dozens of “X with Y” features (“generic const in associated lifetime position”… to use a made-up example) that are just about lifting limitations, and that end up sounding “complicated” and “featureful” (kitchen-sink accusations) to anyone who isn’t knee-deep in building an async runtime library or whatever.

Meanwhile a Haskell programmer might go years and never think about HKT as a feature. It’s just “kinds” without artificial-looking limitations.


Before writing a completely asinine comment like this, you might consider that I, having been paid real American dollars to design this language, might know more than you about the relationship between GATs and HKTs and the design of Rust as a whole. Your comment evinces a total ignorance of the type theoretical issues that actually informed our decision on this issue.

Fortunately, Niko Matsakis blogged about our design discussions at the time. You can read more here and in the linked predecessor posts (at the time we were calling GATs "ATCs") https://smallcultfollowing.com/babysteps/blog/2016/11/04/ass...


Yes, your attitude (shall we say) was apparent from the start.


> Meanwhile a Haskell programmer might go years and never think about HKT as a feature. It’s just “kinds” without artificial-looking limitations.

And yet, at the same time, Haskell has many, many extensions to enable certain features that would come for free in a dependently typed language, for example. I find Idris's type system easier to fit in my head, for example.

There's always a next level of generality in which previously complicated concepts can be expressed more simply, but there are also sometimes reasons not to want to reach that level of abstraction, for various reasons.

(Although in principle, I agree. I don't find the concept of HKTs particularly complicated per se.)


> And yet, at the same time, Haskell has many, many extensions to enable certain features that would come for free in a dependently typed language, for example.

For sure.

In Rust’s case, though, it seemed that insiders were saying that HKT was more than the language needed, while now they keep running into limitations that necessitate patching up the language. And it's the language itself being patched, not compiler extensions (maybe rustc is the only compiler people use, but the updates really are to the language in the abstract).

But in any case, I’ve been told that I’m just talking out of my behind ;)


> Async support is incredibly half-baked. It was released as an MVP, but that MVP has not improved notably in almost three years.

As the primary mover of the MVP (who stopped working on Rust shortly after it was launched), I'm really sad to see this. I certainly didn't imagine that 3 years later, none of the next steps past the MVP would have even made it into nightly. I don't want to speculate as to why this is.

I also recommend avoiding async if you don't need it. Unfortunately for people who don't need it but do want to do a bit of networking, a huge part of the energy behind Rust is in cloud data plane applications, which do need it.


> As the primary mover of the MVP (who stopped working on Rust shortly after it was launched)

At the risk of rehashing something that has already been discussed to death, did you stop working on Rust because of the difficulty of launching that MVP? I imagine that all of the arguments involved, particularly about things that are prone to bike-shedding like the await syntax, could be exhausting.

Any idea where we should send money to get things moving again? Lack of money is always the biggest problem in open source, right?

FWIW, I only started seriously using Rust in the past year. I quietly watched and waited while the async/await MVP was being developed. I didn't participate in any discussions. On the one hand, that means I didn't exacerbate any exhausting arguments. On the other hand, I wasn't actively supportive either.


That's not why I stopped working on Rust. Given that Amazon, Google, Microsoft and others all employ people to work on Rust, lack of money is certainly not the problem. There has never been more money in Rust development.


Java's NPE is not at all the same thing as the undefined behavior of dereferencing a null pointer in C (or Zig in release mode). In fact, it's exactly the same thing as calling unwrap. Your Java server's NPE is not bringing down the entire server because the exception is caught somewhere. The same can be done in Rust (and is, by many frameworks), where panics are caught before they crash the entire application.
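As a sketch of the parallel (`handle_request` is a hypothetical handler, not any framework's API): a panic from `.unwrap()` can be caught at a request boundary with `std::panic::catch_unwind`, which is essentially what those frameworks do per request.

```rust
use std::panic;

// Hypothetical request handler: unwrapping None panics,
// much like an NPE in Java.
fn handle_request(input: Option<i32>) -> i32 {
    input.unwrap() * 2
}

fn main() {
    // The "framework" catches the panic at the boundary instead of
    // letting it take down the whole server.
    let ok = panic::catch_unwind(|| handle_request(Some(21)));
    let err = panic::catch_unwind(|| handle_request(None));
    assert_eq!(ok.unwrap(), 42);
    assert!(err.is_err()); // the panic was contained, not fatal
}
```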


> or Zig in release mode

My understanding is that Zig doesn't allow pointers to be null in the first place (regardless of release mode) unless you're 1) manually creating a pointer from an integer or 2) interfacing with C, and in both of those cases all bets are already off anyway (as they would be in Rust). The only "supported" options outside of that would be a non-null pointer or None.


This is the opposite of reality: in Rust, the safe syntax `?` exists and there is no short syntax for the panicking form (`.unwrap()`). Users are not incentivized to unwrap in Rust; users are as disincentivized as possible from unwrapping.

The uses of unwrap in the projects the GP cited are overwhelmingly in tests and examples, which are not expected to handle errors in the same way as application code. The remainder are mainly lock poisoning unwrapping, which is a completely endorsed idiom.
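For reference, the lock-poisoning case looks like this: `Mutex::lock` returns a `Result` that is `Err` only if another thread panicked while holding the lock, so unwrapping it is the endorsed idiom being described.

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);
    // lock() fails only on poisoning (a thread panicked while
    // holding the lock); .unwrap() asserts that didn't happen.
    *counter.lock().unwrap() += 1;
    assert_eq!(*counter.lock().unwrap(), 1);
}
```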


For errors the ? operator is both correct and easy most of the time.

But for options it's trickier. Unless the surrounding function is structured just right, the equivalent to TypeScript's and Kotlin's `?.` operator is .map()/.and_then(), and that's pretty ugly. .unwrap() is easier.

Try blocks might help.
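For the record, `?` does work on `Option` too; the catch alluded to above is that the surrounding function must itself return `Option` (or you restructure with `.map()`/`.and_then()`). A sketch, with a hypothetical helper:

```rust
// `?` early-returns None, so the function must return Option.
fn first_char_upper(s: &str) -> Option<char> {
    let c = s.chars().next()?; // None if the string is empty
    Some(c.to_ascii_uppercase())
}

fn main() {
    assert_eq!(first_char_upper("rust"), Some('R'));
    assert_eq!(first_char_upper(""), None);
}
```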


I disagree with your characterization that "?" is the "safe syntax"; both are safe but one aborts execution.


It is not at all like Rust; it is a garbage collection system based on reference counting, pretty similar to Swift. This is obscured by the documentation for each using similar terminology (like "move semantics" and "ownership") to refer to systems with different characteristics.


Thanks for the clarification, withoutboats2! It's really confusing, as Nim's move semantics "feel" very similar to Rust's (except that Nim is more relaxed regarding "use after move": it simply performs a copy). I'll need to learn a lot more about the different concepts.


As others have written, there's basically no connection between Rust's six-week release cycle and the rate of actual change to Rust. If Rust switched to a 12-week release cycle, all that would change is that it would take twice as long for features to reach users on stable once they were approved for release. Feature development is neither release driven nor release constrained.

However, the ground truth is that the design of significant new language features basically stopped in 2018. The work now is shepherding through the language extensions the project already committed to in the first few years after 1.0. For example, const generics, which will have a stable MVP in the next release, was first begun in 2017. Generic associated types, which are still not near stable, in 2016.

And in my own view, these remaining features make for a nearly "feature complete" Rust. Future extensions will be either more niche (a lot of attention in the last few years has been aimed at features to make writing unsafe code less error prone, which wouldn't visibly impact users who write only safe code) or less substantial (such as syntactic sugar that would improve a lot of users' lives, but not deeply change how they write code). And that's actually very good.


Imagine if Rust only did releases every 18 or 36 months like other popular languages, or every six months like OP wants. Features that haven't been fully baked would be pushed into the current release because people wouldn't want to wait another 6 months to see their work come to fruition. When the next release is only six weeks away, the cost of saying "aight, let's take our time and make sure it's ready before sending it out" is so small. The MVP of const generics has been delayed by 6 weeks, upsetting exactly 0 people.

Regular releases of the compiler are awesome for the same reason continuous deployment of web apps is - each release only packs a small number of features. The chances of something breaking are much smaller than if it was a massive release of hundreds of features, all of which need to be tested in isolation and then with each other. The releases page (https://github.com/rust-lang/rust/blob/master/RELEASES.md) shows how rare it is for a fix release to be made. Even when it is, it's something minor.

Making the language easier to learn is a high priority. Many people are working on this actively right now - better documentation, better error messages, better IDE experience and so on. OP has misdiagnosed the root cause and blamed something that actually makes it easier to learn the language.


> Features that haven't been fully baked would be pushed into the current release because people wouldn't want to wait another 6 months to see their work come to fruition.

This psychological tinkering isn't helpful. People can push unfinished stuff in any situation. E.g. you can close it off 5 months in and then stabilise for a month.

I don't think the OP is having trouble learning. They're having trouble with the almost-monthly changes.


I was at Mozilla when we switched from 3-to-6-monthly releases to 6-weekly releases. It really was an enormous positive improvement.

> you can close it off 5 months in and then stabilise for a month.

This doesn't help. If a release happens every six months then regardless of what you do during the release cycle, you are faced with situations where someone has a work item just about ready to ship in release X, but not quite, and you face enormous pressure to ship it in release X rather than in release X+1 six months later. On the flip side if you do force it to slip to release X+1 then that's several months of users not having the benefit even though it's ready.

If you're releasing every six weeks, there isn't any such pressure. Less work is shipped prematurely, less work is shipped later than necessary, and developers are less stressed.


At the company I work at right now, we went from releasing every sprint at a specific release window to releasing whenever we want, because we have some blue-green support.

And it has been a game changer if you ask me. Jira tickets don't pile up in the test lane, and testers don't have to do one big regression test at the end of the sprint just to be sure (sometimes even working overtime to get through it). The area of code that introduces new bugs is way smaller, and the bugs have become easier to trace down.


> People can push unfinished stuff in any situation.

But they don't in Rust. My example of Min Const Generics is an example of this. Async-Await (developed by GP on this thread) was also delayed to give it more time to bake.

I urge you to look at the releases page (https://github.com/rust-lang/rust/blob/master/RELEASES.md). Look at how few user facing changes are made with every release. This is a good thing! As long as you're ok with skipping these and the performance improvements, you can stay on an older release for basically forever. Most Rust libraries are conservative about their minimum supported Rust version (MSRV) so there isn't usually a push to upgrade at all.

If the OP wants a compiler and toolchain that only changes every six months, that's already available. He can stick with the same compiler for six months or even longer with 0 downsides.


> If the OP wants a compiler and toolchain that only changes every six months, that's already available. He can stick with the same compiler for six months or even longer with 0 downsides.

Sure, but there are also things like how long the current version will be supported if there are security issues, or similar. People also want a version that will be supported with bug/security fixes for 6 or 12 months, versus "Oh, it's broken? Upgrade to head."


6 months in an enterprise setting means that we will never touch Rust.


The idea that rapid release is antithetical to the enterprise is a very archaic view of enterprise computing. Enterprises use cloud HR software, office suites (MS365 and Google Workspace), Salesforce, etc. all the time, and those update on a fairly frequent basis. Also, given the need to patch promptly, most forward-looking enterprises have much more responsive systems in place -- including investments in good CI/CD systems to deliver even internal software faster. So they're quite comfortable with the "new normal" of the software industry.

The most used enterprise OS, Windows, is rapid release too -- every 6 months, coincidentally! But there's a LTSB branch you can use. Java releases every 6 months too, and that hasn't halted enterprise adoption. Similarly with C#, not every team jumps to the latest language level just because it was released. Teams should pick language updates at a speed that suits them -- but hopefully not so slowly that moving to newer releases becomes difficult.

Similarly with Go, and enterprises are using the latter two in massive numbers. Why should it be any different with Rust? The key thing, as others have noted, is stability and a lack of breaking changes.


>with C#, not every team jumps to the latest language level just because it was released.

Exactly. When it comes to Java, even though there is now a 6-month release cadence, take a survey of which version most are using. Certainly every time I bring this up, someone pops up and says "I'm using 11" or "10". But the vast majority of "enterprises" are on, at most, Java 8. And they'll remain on 8 until something forces them off of it. Enterprises by nature move at a glacial pace. So their reason for adopting Java is not because it's "moving fast".


I've come to dislike versioning stuff more and more. It's an invitation for people to get stuck on a version. And as time goes on it becomes harder to upgrade to the latest version so the trap continues to tighten. Enterprisey stuff should choose things that prioritize backwards compatibility and well tested changes. Settling on a LTS version seems attractive but it's a trap and should be avoided.


> So their reason for adopting Java is not because it's "moving fast".

I was replying to the parent poster, who said that moving fast is a roadblock to enterprise adoption. Whether enterprises adopt Java because it's moving fast is another question altogether. Certainly no one is deserting Java because of the decision to boost its release cadence.

> Enterprises by nature move at a glacial pace

I think it depends on the enterprise.[1]

Cyber-security has changed the game for many enterprises very fundamentally. Well-run enterprises now patch more promptly than many consumers -- at least, certainly the consumers who don't auto-update. Well, unless they want a Maersk-like incident[2], or to be infected by ransomware. Maersk's incident cost it $200M, if you believe Forbes.

About slow JVM upgrades, a lot of that is a cost/benefit calculation or poor technical leadership. CIOs and COOs (or audit, for that matter) have no interest in which Java version you use. The only questions are: Is it secure? Does it make commercial sense?

For actively developed software, staying on a recent-ish Java isn't a problem (many good enterprises have adopted Devops in some shape and form -- just add a task to your CI server to test on the latest JRE) once your engineering team has crossed the Java 8-to-11 hump. If you do that, you get to not pay Oracle $$$ per desktop/server every year (or you could use Corretto etc, but most enterprises will choose to pay Oracle). Or you could simply upgrade for the productivity & JVM improvements.

Legacy products not in active development are another matter, but even there -- the cost of not getting security updates is too high nowadays, so you have to spend money to secure them -- and legacy software has other costs too[3]. So unless you're a developer working on a legacy app, or an app whose design is so poor that it's locked into Java 8, there's no reason you couldn't be on Java 11 soon and get on the rapid cadence train yourself thereafter. It's really about how nimble your team is.

Of course, you could be unfortunate enough to work in an enterprise that doesn't take cyber-security seriously, in which case YOLO.

[1] https://news.ycombinator.com/item?id=25873325

[2] https://www.forbes.com/sites/leemathews/2017/08/16/notpetya-...

[3] https://arstechnica.com/tech-policy/2021/02/citibank-just-go...


Cloud hosted SaaS may release a lot faster but it also means you are not on the hook for supporting it.


There are generally speaking no breaking changes. Upgrading the toolchain practically never requires any changes to code.


> Upgrading the toolchain practically never requires any changes to code.

If you work on safety/security-critical systems, upgrading the toolchain is considered like changing the tires on a moving vehicle. We have introduced, and many weeks later discovered, critical (and hard to trace) bugs simply by changing -O2 to -O3 in gcc that ended in sporadic crashes when cross-compiling for 1 specific platform (which is a change much less severe than upgrading gcc itself). And I'd be surprised if upgrading the toolchain wouldn't potentially introduce similar hard to trace issues in rust.

My point is that just because the first 2 months after an upgrade look like no code changes were required doesn't mean there were none. I see Rust being used in very large companies for internal projects, test harnesses, and other glue/plumbing. And the places I know are really reluctant to go all the way in and substitute their ~30 years of experience with C/C++ for something that remains a constantly moving target. To be fair, there are other reasons, such as culture and the distribution of skills across the team, that drive these decisions, but my main point still holds (I think).


> changing -O2 to -O3 in gcc that ended in sporadic crashes when cross-compiling for 1 specific platform.

Sure, this happens regularly in C and C++ because large programs inevitably depend on undefined behaviour, and changing compiler versions, optimization levels and target platforms is allowed to change that behaviour.

For the last five years I've been writing 99% safe Rust code, which is designed to have no undefined behavior, and I think it's not a coincidence that I can't remember ever having a compiler update break code at runtime. Once in a long while a compiler update will fail to compile some existing code, but that's not a safety issue.

The idea that C and C++ compilers are somehow more stable across releases is a mirage. https://lwn.net/Articles/845691/ https://lwn.net/Articles/845775/
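To illustrate the "no undefined behavior" point with a small sketch: the classic -O2/-O3 hazards in C, like signed integer overflow, simply have defined behavior in safe Rust, so there is no UB for a new optimizer to exploit.

```rust
fn main() {
    let x = i32::MAX;
    // Signed overflow is undefined in C; in Rust every spelling of
    // it has defined behavior, independent of optimization level.
    assert_eq!(x.wrapping_add(1), i32::MIN);   // explicit two's-complement wrap
    assert_eq!(x.checked_add(1), None);        // overflow detected, not UB
    assert_eq!(x.saturating_add(1), i32::MAX); // clamp at the maximum
}
```

(Plain `x + 1` is also defined: it panics in debug builds and wraps in release builds, by specification rather than by accident.)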


No, this happens because bugs in compilers happen. Nothing to do with undefined behavior. (Which is a separate problem)


It can be both.


1) Sorry, but if you're talking safety critical, then you're talking Sealed Rust/Ferrocene which is talked about here: https://ferrous-systems.com/blog/sealed-rust-the-plan/

I mean, even with C, there is a very circumscribed subset that you use for "safety critical" applications.

2) People are banging on about Rust not being stable but the Rust Embedded guys DO lag for stability--so you can have stability if you choose. And Rust is by far the best language I have seen about being able to stay on old compiler versions and still use newer libraries.

3) I find crappy libraries to be the biggest threat to Rust's sustainability. There was a big "Gold Rush" mentality, and a whole lot of unqualified people built a whole lot of crap libraries, with no good way for someone to come along later and flag "That's a crap library, please remove it from crates.io permanently".


If you don't upgrade the compiler at all, then why does it matter how fast it updates? You're not going to be using the new version anyway.


Depends on the libraries I guess, when that CVE fix is only available on a version that requires a compiler upgrade.


Eventually they will stop making the CPU we use and then the new CPU only works with a new compiler.


>-O2 to -O3 in gcc

This sounds more like undefined behavior in your codebase than a compiler bug. I don't know Rust, but I'm pretty sure the whole selling point is that you won't run into problems like this.


There are active open bugs today in GCC and clang for x86 where optimizations can break standards-correct code [0] (writing to a union through a pointer to a currently inactive member, which std::variant does internally). For more obscure platforms and older releases, there are bound to be numerous bugs. In particular, volatile handling is notoriously shoddy on many platforms.

As an aside, the clang bug would also affect Rust if they used that type of optimization (assuming strict aliasing rules).

[0] https://lists.isocpp.org/std-discussion/2020/10/0882.php


> As an aside, the clang bug would also affect Rust if they used that type of optimization (assuming strict aliasing rules).

No, this bug cannot affect Rust, because Rust wouldn't allow passing three mutable references pointing to the same object into a function. It wouldn't even allow two, nor one mutable and one immutable. Either you pass one mutable reference, or any number of immutable ones.


This one (probably) wouldn’t, but there have been codegen bugs in Rust, some that have blocked people from upgrading before they can be fixed.

It doesn’t happen often, but it does happen.


Not sure about that bug specifically, but the fact that clang in noalias mode assumes that two pointers to different variants of the same union are not aliased (if it can't see from the immediate context that they are pointers to the same union variable) can very much affect Rust: if you call one function with a &mut to one field, then another function with a &mut to another field, and the optimizer inlines both functions in a broader context and loses the union information, you may well see reordering issues even in pure safe Rust.

Anyway, Rust has found plenty of noalias bugs in clang, which is why it still doesn't compile with noalias optimizations turned on.


>And I'd be surprised if upgrading the toolchain wouldn't potentially introduce similar hard to trace issues in rust.

It absolutely happens, but less often than you would think.

Rust has a tool named "crater" which compiles, and executes the tests of, every package uploaded to crates.io (and some which are just on GitHub).

It's not perfect but supposedly it catches quite a lot of bugs.


> > Features that haven't been fully baked would be pushed in to the current release because people wouldn't want to wait another 6 months to see their work come to fruition.

> This psychological tinkering isn't helpful.

The former comment is thoughtfulness about the ways that a system of work influences contributors' mindsets and thereby project quality. Even if the specific conclusion was incorrect, the earnest attempt is necessary to project health. If you disagree, you probably want to short-sell Toyota stock.


What on earth is projecting health?


I read it the same way at first but maybe they mean "necessary to the health of the project".


Yes. By “project health”, I mean “the health of the project”.


Here are the release notes for the last release: https://blog.rust-lang.org/2021/02/11/Rust-1.50.0.html

I'm curious, what in there would make someone have to deal with the changes?


Maintaining ~30 Rust applications and a few libraries at work, a handful of open-source projects, and a few embedded Rust personal projects, I didn't need to change anything at all after updating (which is the norm for most Rust releases).


In practice, the biggest things have been rustc and clippy lints starting to detect something, either stylistically dubious or actually problematic, that previously existed in our codebase.


I cannot imagine anyone would have to change anything.


> Regular releases of the compiler are awesome for the same reason continuous deployment of web apps is - each release only packs a small number of features.

Developing a compiler for a complex language is very different from developing a webapp.


It sure is, but do you have any substantial reason why that would invalidate the point they are making, "each release only packs a small number of features"?


>Features that haven't been fully baked would be pushed in to the current release because people wouldn't want to wait another 6 months to see their work come to fruition.

Only if Rust has an awful leadership team organizing those releases. Strong leadership and careful release planning nips this right in the bud. "Your feature was only 'finished' a week ago? No way in hell is it making the release. See you in six months."

The other view is that shipping less frequently encourages features to be shipped fully baked and matured, with plenty of time for careful thought and attention given to their design and implementation.


The point is, you build a process that encourages the outcome you want. A six-week release cycle makes it easier to do the right thing. Even with the six week cycle, we do this, with const generics being the latest example. But it's much, much easier to say "hey it's just six more weeks" than "hey it's just three more years," not just on the folks making the decision, but on the folks doing the work, and on the folks excited to use the feature.

> shipping less frequently encourages features to be shipped fully baked and matured,

Right, but this is just a view. A significant part of the industry believes that shipping less frequently discourages this. We've found that shipping more frequently allows us to take our time and fully bake and mature features in Rust.


Hi author here,

This is just a report of the voices of the people I'm converting to Rust because they are never heard.

The compound effect of small and fast changes is complexity.

Uncontrolled complexity is the thing we want to avoid as it's fatal to any project.

Yes there are editions. But from a newcomer's point of view they only add complexity.

I'm convinced that the short release cycle and feature 'bloat' are related. I simply don't want Rust to become the new C++ (which is the feeling of a lot of people).


> I'm convinced that the short release cycle and feature 'bloat' are related. I simply don't want Rust to become the new C++

I'm confused... C++ doesn't have a short release cycle and it became feature bloated, so how did that lead you to conclude that the short release cycle of Rust is causing Rust to become feature bloated like C++?


My opinion (programming with Rust since May 2015): Rust is complex, but in 2020 I discovered that in «This week in Rust» [0] many weeks pass with the section

    New RFCs
    No new RFCs were proposed this week

This shows that the rapid pace of change has already started to slow.

[0]: https://this-week-in-rust.org/


The new C++ is unquestionably preferred to the old C++ (when nothing was standard or built in, everybody spent ages trying to build Boost, and so on).

Software development / computer programming is complex. Hiding complexity is bad. News at 11. /s

So, that said newcomers are free to pick their poison regarding tech stack, language, etc. They can use something simple, easy, friendly, and sort of self-contained. Let's say Ruby on Rails, Django, or C# stuff. Inevitably they'll very soon run into the problem of managing dependencies, platforms, native code, security considerations, performance issues, etc.

Everything is a trade-off after all.


> I'm convinced that the short release cycle and feature 'bloat' are related. I simply don't want Rust to become the new C++ (which is the feeling of a lot of people).

Most of the problems with C++ come from the long release cycle and cause the bloat you're talking about. C++ rushes out unfinished features to make a release window (see std::visit for a great example), resulting in half-working features that are horrible to use and full of footguns.


I hope your book does well.


That may be true, but there are already discussions about revisiting some already-implemented features - e.g. Async (https://blog.rust-lang.org/2021/03/18/async-vision-doc.html). So I'm not really optimistic that the pace of changes will slow down significantly...


I read the post you linked and it doesn't seem to say that at all... it's about how async can be improved in general, not revisiting existing features.

Did you mean to link somewhere more specific?


Exactly - this release train metaphor that Spotify uses explains this visually:

https://medium.com/pm101/spotify-squad-framework-part-i-8f74...


Over-engineering in the hope that the language does everything for everybody is my biggest worry, and why my excitement to consider it as something I want to work with drops with every year. Considering how quickly the language evolves (which ought to be a good thing considering it's still new), I'm being put off by all these features that only a few need, while at the same time things like async are broken by design (not to mention the opinionated tooling of cargo, rustup, etc that follow the same flawed principles as npm, rvm and even cpan).

Perhaps I'm being too conservative, but I'd have more trust in the language if things were moving slower, especially since Rust positions itself as a systems programming language fit for production (today). Which is odd, because what I want from something that ticks these 2 boxes is interface stability. I have more confidence in Zig or Nim for this reason. Give it another 5 years and it will be a similar mess as C++, or worse: Python (which ended up rolling out v3, which wasn't backward compatible with v2). Also the "safety/security" argument which is their big selling-point which goes out the window as soon as you pull 3rd party crates (as is the case in most projects).


Suggesting that Rust will follow Python by forking the language incompatibly is completely unfounded. No-one has shown any interest in that, and the edition system was designed specifically to avoid any need for that.

> Also the "safety/security" argument which is their big selling-point which goes out the window as soon as you pull 3rd party crates

Not at all. Rust's safety guarantees work in practice. In years working on our largish Rust project we practically never have had to deal with memory corruption or data races.


> Not at all. Rust's safety guarantees work in practice.

They don't just work in practice. I spent 3 months last semester being taught the separation logic "Iris", which is used to formally prove the safety guarantees of Rust as part of the RustBelt[0] project. That was under Lars Birkedal, for anyone curious.

Rust is a language I love doing actual work in, but it's also one I really like from the perspective of the theoretical backing.

  [0] : https://plv.mpi-sws.org/rustbelt/


Fun fact: Lars and I were in the same PhD intake at CMU. Tell him Robert O'Callahan says hi.


It doesn't really matter to me what you, an internet stranger, are "excited to consider as something you want to work with." Instead, what you should consider is how incredibly boorish it is to post that "async is broken by design" in reply to the person who designed async.


I’m not sure why “rust async bad” has become a meme lately. I think people who don’t know much about rust need something to put it down and async is just an easy target. I’ve been using rust professionally for 3 years now and think async is great, and certainly has come a long way. I think there’s some work around closures that could be made better (explicit lifetimes, matching, etc) and some ecosystem fragmentation (Tokio, async-std) but I’m sure everything will settle out in due time. Thanks for contributing - don’t let internet people get you down.


> I have more confidence in Zig or Nim for this reason

Do you realize that at this point Rust is way more stable than those two languages? It's not a critique at all as they are much younger, but these are really bad examples to compare with.


The first public release of Nim was June 2008, but work began a few years earlier. So Nim is at least a couple of years older than Rust. Stability is often subjective (in terms of the features people actually use).


> async are broken by design

That's, uh, a rather bold assertion. Perhaps you could elaborate on why you feel it's "broken by design"? I've never heard anyone take that position at all, much less make an implied claim that it's a widely-accepted position.


You likely won't get an answer from someone making hyperbolic claims like that. But it's probably this discussion - https://internals.rust-lang.org/t/unsoundness-in-pin/11311/4...


> not to mention the opinionated tooling of cargo, rustup, etc that follow the same flawed principles as npm, rvm and even cpan

Can you specify, what are these flawed principles? And what would be a better system? Thanks!


> I am not sure whether this is actually viable.

Having investigated this myself, I would be very surprised to discover that it is.

The only viable solution to make AsyncRead zero-cost for io-uring would have been to require futures to be polled to completion before they are dropped. So you can give up on select and most necessary concurrency primitives. You really want to be able to stop running futures you don't need, after all.

If you want the kernel to own the buffer, you should just let the kernel own the buffer. Therefore, AsyncBufRead. This will require the ecosystem to shift where the buffer is owned, of course, and that's a cost of moving to io-uring. Tough, but those are the cards we were dealt.
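To make that shape concrete, here is a synchronous toy sketch of the "IO object owns the buffer" model that AsyncBufRead encodes. The type name is made up, but the fill_buf/consume pair deliberately mirrors std::io::BufRead:

```rust
// Toy sketch: the reader owns its buffer, and callers only ever borrow
// a view into it. With an io-uring backend, "our buffer" is the one the
// kernel fills, so nothing needs to be lent across the syscall boundary.
struct KernelOwnedReader {
    buf: Vec<u8>, // owned by the IO object, never by the caller
    pos: usize,
}

impl KernelOwnedReader {
    // Analogue of fill_buf: hand the caller a view into our buffer.
    fn fill_buf(&mut self) -> &[u8] {
        &self.buf[self.pos..]
    }
    // Analogue of consume: the caller reports how much it used.
    fn consume(&mut self, amt: usize) {
        self.pos += amt;
    }
}

fn main() {
    let mut r = KernelOwnedReader { buf: b"hello".to_vec(), pos: 0 };
    assert_eq!(r.fill_buf(), b"hello");
    r.consume(2); // parsed two bytes
    assert_eq!(r.fill_buf(), b"llo");
}
```

Protocol libraries that parse from borrowed byte slices like this, instead of owning an IO handle, fit this model with no changes.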


Well, you can still have select; it "just" has to react to one of the futures becoming ready by cancelling all the other ones and waiting (asynchronously) for the cancellation to be complete. Future doesn't currently have a "cancel" method, but I guess it would just be represented as async drop. So this requires some way of enforcing that async drop is called, which is hard, but I believe it's equally hard as enforcing that futures are polled to completion: either way you're requiring that some method on the future be called, and polled on, before the memory the future refers to can be reused. For the sake of this post I'll assume it's somehow possible.

Having to wait for cancellation does sound expensive, especially if the end goal is to pervasively use APIs like io_uring where cancellation can be slow.

But then, in a typical use of select, you don't actually want to cancel the I/O operations represented by the other futures. Rather, you're running select in a loop in order to handle each completed operation as it comes.

So I think the endgame of this hypothetical world is to encourage having the actual I/O be initiated by a Future or Stream created outside the loop. Then within the loop you would poll on `&mut future` or `stream.next()`. This already exists and is already cheaper in some cases, but it would be significantly cheaper when the backend is io_uring.


> But then, in a typical use of select, you don't actually want to cancel the I/O operations represented by the other futures. Rather, you're running select in a loop in order to handle each completed operation as it comes.

You often do want to cancel them in some branches of the code that handles the result (for example, if they error). It indeed may be prohibitively expensive to wait until cancellation is complete - because io-uring cancellation requires a full round trip through the interface, the IORING_OP_ASYNC_CANCEL op is just a hint to the kernel to cancel any blocking work, you still have to wait to get a completion back before you know the kernel will not touch the buffer passed in.

And this doesn't even get into the much better buffer management strategies io-uring has baked into it, like registered buffers and buffer pre-allocation. I'm really skeptical of making those work with AsyncRead (now you need to define buffer types that deref to slices that are tracking these things independent of the IO object), but since AsyncBufRead lets the IO object own the buffer, it is trivial.

Moving the ecosystem that cares about io-uring to AsyncBufRead (a trait that already exists) and letting the low-level IO code handle the buffer is a strictly better solution than requiring futures to run until they're fully, truly cancelled. Protocol libraries should already expose the ability to parse the protocol from an arbitrary stream of buffers, instead of directly owning an IO handle. I'm sure some libraries don't, but that's a mistake that this will course-correct.


> Well, you can still have select; it "just" has to react to one of the futures becoming ready by cancelling all the other ones and waiting (asynchronously) for the cancellation to be complete.

Right. Which is more or less what the structured concurrency primitives in Kotlin, Trio, and soon Swift are doing.


Alex Crichton started with a completion based Future struct in 2015. It was even (unstable) in std in 1.0.0:

https://doc.rust-lang.org/1.0.0/std/sync/struct.Future.html

Our async IO model was based on the Linux industry standard (then and now) epoll, but that is not at all what drove the switch to a polling based model, and the polling based model presents no issues whatsoever with io-uring. You do not know what you are talking about.


>Our async IO model was based on the Linux industry standard (then and now) epoll, but that is not at all what drove the switch to a polling based model

Can you provide a link to a design document, or at the very least to a discussion with the motivation for this switch, outside of the desire to be as compatible as possible with the "Linux industry standard"?

>the polling based model presents no issues whatsoever with io-uring

There are so few issues with io-uring compatibility that you wrote a whole blog post about those issues: https://boats.gitlab.io/blog/post/io-uring/

IIUC the best solutions right now are either to copy data around (bye-bye zero-cost) or to use another Pin-like awkward hack with executor-based buffer management, instead of using simple and familiar buffers which are part of a future's state.


https://aturon.github.io/blog/2016/09/07/futures-design/

The completion-based futures that Alex started with were also based on epoll. The performance issues it presented had nothing to do with any sort of impedance mismatch between a completion-based future and epoll, because there is no impedance issue. You are confused.


Thank you for the link! But immediately we can see the false equivalence: a completion-based API does not imply the callback-based approach. The article critiques the latter, but not the former. Earlier in this thread I've described how I see a completion-based model built on top of FSMs generated by the compiler from async fns. In other words, the arguments presented in that article do not apply to this discussion.

>The performance issues it presented had nothing to do any sort of impedence mismatch between a completion based future and epoll

Sorry, but what? Even aturon's article states zero-cost as one of the 3 main goals. So performance issues with strong roots in the selected model are a very big problem in my book.

>You do not know what you are talking about.

>You are confused.

Please, tone down your replies.


> Please, tone down your replies.

You cannot literally make extremely inflammatory comments about people's work, and accuse them of all sorts of things, and then get upset when they are mad about it. You've made a bunch of very serious accusations on multiple people's hard work, with no evidence, and with arguments that are shaky at best, on one of the largest and most influential forums in the world.

I mean, you can get mad about it, but I don't think it's right.


I found it highly critical but not inflammatory - though I'm not sure if I'd've felt the same way had they been being similarly critical of -my- code.

However, either way, responding with condescension (which is how the 'industry standard' thing came across) and outright aggression is never going to be constructive, and if that's the only response one is able to formulate then it's time to either wait a couple hours or ask somebody else to answer on your behalf instead (I have a number of people who are kind enough to do that for me when my reaction is sufficiently exothermic to make posting a really bad idea).

boats-of-a-year-ago handled a similar situation much more graciously here - https://news.ycombinator.com/item?id=22464629 - so it's entirely possible this is a lockdown fatigue issue - but responding to calmly phrased criticism with outright aggression is still pretty much never a net win, and defending that behaviour seems contrary to the tone the Rust team normally tries to set for discussions.


Of course I was more gracious to pornel - that remark was uncharacteristically flippant from a contributor who is normally thoughtful and constructive. pornel is not in the habit of posting that my work is fatally flawed because I did not pursue some totally unviable vaporware proposal.


I am not mad; it was nothing more than an attempt to urge a more civil tone from boats. If you both think that such a tone is warranted, then so be it. But it does affect my (really high) opinion of you.

I do understand the pain of having your dear work harshly criticized. I have experienced it many times in my career. But my critique was intended as tough love for a language in which I am heavily invested. If you see my comments as only "extremely inflammatory"... Well, it's a shame I guess, since it's not the first case of the Rust team unnecessarily rushing something (see the 2018 edition debacle), so such an attitude only increases the rate of mistake accumulation in Rust.


I do not doubt that you care about Rust. Civility, though, is a two-way street. Just because you phrase something in a way that has a more neutral tone does not mean that the underlying meaning cannot be inflammatory.

"Instead of carefully weighing advantages and disadvantages of both models," may be written in a way that more people would call "civil," but is in practice a direct attack on both the work, and the people doing the work. It is extremely difficult to not take this as a slightly more politely worded "fuck you," if I'm being honest. In some sense, that it is phrased as being neutral and "civil" makes it more inflammatory.

You can have whatever opinion that you want, of course. But you should understand that the stuff you've said here is exactly that. It may be politely worded, but is ultimately an extremely public direct attack.


>Earlier in this thread I've described how I see a completion-based model built on top of FSMs generated by compiler from async fns. In other words, the arguments presented in that article do not apply to this discussion.

I've been lurking your responses, but now I'm confused. If you are not using a callback-based approach, then what are you using? Rust's FSM approach is predicated on polling; in other words, if you aren't using callbacks, then how do you know that Future A has finished? If the answer is to use Rust's current systems, then that means the FSM is "polled" periodically, and then you still have the "async Drop" problem as described in withoutboats' notorious article, and furthermore, you haven't really changed Rust's design.

Edit: As I've seen you mention in other threads, you need a sound design for async Drop for this to work. I'm not sure this is possible in Rust 1.0 (as Drop isn't currently required to run in safe Rust). That said, it's unfair to call async "rushed" when your proposed design wouldn't even work in Rust 1.0. I'd be hesitant to call the design of the entire language rushed just because it didn't include linear types.


I meant the callback-based approach described in the article; for example, take this line from it:

>Unfortunately, this approach nevertheless forces allocation at almost every point of future composition, and often imposes dynamic dispatch, despite our best efforts to avoid such overhead.

It clearly does not apply to the model which I've described earlier.

Of course, the described FSM state transition functions can be rightfully called callbacks, which adds a certain amount of confusion.

I can agree with the argument that a proper async Drop cannot be implemented in Rust 1.0, so we have to settle for a compromise solution. Same with proper self-referential structs vs Pin. But I would like to see this argument explicitly stated, with sufficient backing for the impossibility claims.


>Of course, the described FSM state transition functions can be rightfully called callbacks, which adds a certain amount of confusion.

No, I'm not talking about the state transition functions. I'm talking about the runtime - the thing that will call the state transition function. In the current design, abstractly, the runtime polls/checks every future to see if it's in a runnable state, and if so executes it. In a completion-based design the future itself tells the runtime that the value is ready (either driven by a kernel thread, another thread, or some other callback). (Conceptually the difference is: in a poll-based design, the future calls waker.wake(), and in a completion-based one, the future just calls the callback fn.) Aaron has already described why that is a problem.
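For what it's worth, the distinction can be sketched in a few lines of toy Rust - plain flags and closures standing in for the real Waker/Future machinery:

```rust
// Poll-based notification: the event source only marks the task ready,
// like Waker::wake; the executor then polls to extract the value itself.
fn wake(ready: &mut bool) {
    *ready = true;
}

fn main() {
    let mut ready = false;
    wake(&mut ready); // event source fires
    let polled = if ready { Some(42) } else { None }; // executor polls
    assert_eq!(polled, Some(42));

    // Completion-based notification: the event source hands the value
    // to a callback directly; there is no separate poll step.
    let mut completed = None;
    let mut on_complete = |v: i32| completed = Some(v);
    on_complete(42); // event source delivers the result
    assert_eq!(completed, Some(42));
}
```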

The confusion I have is that both would have problems integrating io_uring into Rust (due to the Drop problem, as Rust has no concept of the kernel owning a buffer), but your proposed solution seems strictly worse, as it requires async Drop to be sound, which is not guaranteed by Rust; that would make it useless for programs that are being written today. As a result, I'm having trouble accepting that your criticism is actually valid - what you seem to be arguing is that async/await should never have been stabilized in Rust 1.0, which I believe is a fair criticism, but it isn't one that indicates that the current design has been rushed.

Upon further thought, I think your design ultimately requires futures to be implemented as a language feature, rather than a library (e.g. for the future itself to expose multiple state transition functions without allocating is not possible with the current trait system), which wouldn't have worked without forking Rust during the prototype stage.


>In an completion based design the future itself tells the runtime that the value is ready

I think there is a misunderstanding. In a completion-based model (read io-uring, but I think IOCP behaves similarly, though I am less familiar with it) it's the runtime that "notifies" tasks about completed IO requests. In io-uring you have two queues represented by ring buffers shared with the OS. You add submission queue entries (SQEs) to the first buffer, which describe what you want the OS to do; the OS reads them, performs the requested job, and places completion queue events (CQEs) for completed requests into the second buffer.

So in this model a task (Future in your terminology) registers an SQE (the registration process may be proxied via a user-space runtime) and suspends itself. Let's assume for simplicity that only one SQE was registered for the task. After the OS sends a CQE for the request, the runtime finds the correct state transition function (via meta-information embedded into the SQE, which gets mirrored into the relevant CQE) and simply executes it; the requested data (if it was a read) will already have been filled into a buffer which is part of the FSM state, so there's no need for additional syscalls or interactions with the runtime to read this data!
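A toy model of those two queues - plain VecDeques standing in for the shared ring buffers, with illustrative field names rather than the real ABI:

```rust
use std::collections::VecDeque;

// Illustrative model of io-uring's two queues: user code pushes
// submission entries, the "kernel" pops them, does the work, and pushes
// completion entries carrying the same user_data tag back.
struct Sqe { user_data: u64, op: &'static str }
struct Cqe { user_data: u64, result: i32 }

fn main() {
    let mut sq: VecDeque<Sqe> = VecDeque::new();
    let mut cq: VecDeque<Cqe> = VecDeque::new();

    // A task registers a read request, tagged so the runtime can later
    // route the completion back to the right state machine.
    sq.push_back(Sqe { user_data: 7, op: "read" });

    // "Kernel" side: process submissions, post completions.
    while let Some(sqe) = sq.pop_front() {
        let result = if sqe.op == "read" { 128 } else { -1 };
        cq.push_back(Cqe { user_data: sqe.user_data, result });
    }

    // Runtime side: drain completions and resume the matching task.
    let cqe = cq.pop_front().unwrap();
    assert_eq!(cqe.user_data, 7);
    assert_eq!(cqe.result, 128);
}
```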

If you are familiar with embedded development, then it should sound quite familiar, since it's roughly how hardware interrupts work as well! You register a job (e.g. a DMA transfer), a dedicated hardware block does it, and it notifies a registered callback after the job is done. Of course, it's quite an oversimplification, but the fundamental similarity is there.

>I think your design ultimately requires futures to be implemented as a language feature, rather than a library

I am not sure if this design would have had a Future type at all, but you are right, the advocated approach requires a deeper integration with the language compared to the stabilized solution. Though I disagree with the opinion that it would've been impossible to do in Rust 1.


Doesn't work because it relies on caller-managed buffers. See withoutboats' post: https://without.boats/blog/io-uring/


It does not work in the current version of Rust, but it's not a given that a backwards-compatible solution for it could not have been designed, e.g. by using a deeper integration of async tasks with the language or by adding proper linear types; hence all the discussions around a reliable async Drop. The linked blog post takes for granted that we should be able to drop futures at any point in time, which, while convenient, has a lot of implications.


This post is completely and totally wrong. At least you got to ruin my day; I hope that's a consolation prize for you.

There is NO meaningful connection between the completion vs polling futures model and the epoll vs io-uring IO models. comex's comments regarding this fact are mostly accurate. The polling model that Rust chose is the only approach that has been able to achieve single allocation state machines in Rust. It was 100% the right choice.

After designing async/await, I went on to investigate io-uring and how it would be integrated into Rust's system. I have a whole blog series about it on my website: https://without.boats/tags/io-uring/. I assure you, the problems it presents are not related to Rust's polling model AT ALL. They arise from the limits of Rust's borrow system in describing dynamic loans across the syscall boundary (i.e. that it cannot describe this). A completion model would not have made it possible to pass a lifetime-bound reference into the kernel and guarantee no aliasing. But all of them have fine solutions building on work that already exists.

Pin is not a hack any more than Box is. It is the only way to fit the desired ownership expression into the language that already exists, squaring these requirements with other desirable primitives we had already committed to: shared ownership pointers, mem::swap, etc. It is simply FUD - frankly, a lie - to say that it will block "noalias"; following that link shows Niko and Ralf having a fruitful discussion about how to incorporate self-referential types into our aliasing model. We were aware of this wrinkle before we stabilized Pin; I had conversations with Ralf about it. It's just that now that we want to support self-referential types in some cases, we need to do more work to incorporate it into our memory model. None of this is unusual.

And none of this was rushed. Ignoring the long prehistory, a period of 3 and a half years stands between the development of futures 0.1 and the async/await release. The feature went through a grueling public design process that burned out everyone involved, including me. It's not finished yet, but we have an MVP that, contrary to this blog post, does work just fine, in production, at a great many companies you care about. Moreover, getting a usable async/await MVP was absolutely essential to getting Rust the escape velocity to survive the ejection from Mozilla - every other funder of the Rust Foundation finds async/await core to their adoption of Rust, as does every company that is now employing teams to work on Rust.

Async/await was, both technically and strategically, as well executed as possible under the circumstances of Rust when I took on the project in December 2017. I have no regrets about how it turned out.

Everyone who reads Hacker News should understand that the content you're consuming is usually from one of these kinds of people: a) dilettantes, who don't have a deep understanding of the technology; b) cranks, who have some axe to grind regarding the technology; c) evangelists, who are here to promote some other technology. The people who actually drive the technologies that shape our industry don't usually have the time and energy to post on these kinds of things, unless they get so angry about how their work is being discussed, as I am here.


Thank you for this post. I have been interested in rust because of matrix, and although I found it a bit more intimidating than go to toy with, I was inclined to try it on a real project over go because it felt like the closest to the hardware while not having the memory risks of C. The co-routines/async was/is the most daunting aspect of Rust, and a post with a sensational title like the grand-parent could have swayed me the other way.

As an aside, it would be great to have some sort of federated cred (meritocratic in some way) on Hacker News, instead of a flat, democratic, populist point system; it would lower the potential eternal September effect.

I would love to see a personal meta-pointing system; it could be a wrapper site: if I downvote a "waste of hackers' daytime" article (say, a long-form article about what is life) in my "daytime" profile, I get a feed weighted by the downvotes of other users that also downvoted this item--basically using peers that vote like you as a pre-filter. I could have multiple filters, one for a quick daytime hacker scan, and one for leisure factoids. One could even meta-meta-vote and give some other hacker's handle a heavier weight...


To hopefully make your day better.

I for one, amongst many people I am sure, am deeply grateful for the work of you and your peers in getting this out!


I second this. Withoutboats has done some incredible work for Rust.


For what it's worth, I agree 100% with the premise of withoutboats' post, based on the experience of having worked a little on Zig's event loop.

My recommendation to people that don't see how ridiculous the original post was, is to write more code and look more into things.


Please, calm down. I do appreciate your work on Rust, but people do make mistakes, and I strongly believe that in the long term the async stabilization was one of them. It's debatable whether async was essential or not for Rust; I agree it gave Rust a noticeable boost in popularity, but personally I don't think it was worth the long-term cost. I do not intend to change your opinion, but I will keep mine and reserve the right to speak about this opinion publicly.

In this thread [1] we have a more technical discussion about those models; I suggest continuing there.

>I assure you, the problems it present are not related to Rust's polling model AT ALL.

I do not agree about all problems, but my OP was indeed worded somewhat poorly, as I've admitted here [2].

>Pin is not a hack any more than Box is. It is the only way to fit the desired ownership expression into the language that already exists

I can agree that it was the easiest solution, but I strongly disagree that it was the only one. And frankly, it's quite disheartening to hear such absolutist statements from a tech leader.

>It is simply FUD - frankly, a lie - to say that it will block "noalias,

Where did I say "will"? I think you will agree that, at the very least, it will cause a delay. The issue also shows that Pin was not properly thought out, especially in light of the other safety issues it has caused. And as you can see from other comments, I am not the only one who thinks so.

>the content your consuming is usually from one of these kinds of people:

Please, satisfy my curiosity. To which category do I belong in your opinion?

[1]: https://news.ycombinator.com/item?id=26410359

[2]: https://news.ycombinator.com/item?id=26407565


> Please, calm down.

By the way, that will almost certainly be taken in a bad way. It's never a good idea to start a comment with something like "chill" or "calm down", as it feels incredibly dismissive.

> I do appreciate your work on Rust, but

There's a saying that anything before a "but" is meaningless.

This is not meant to critique the rest of the comment, just point out a couple parts that don't help in defusing the tense situation.


Thank you for the advice. Yes, I should've been more careful in my previous comment.

I have noticed this comment only after engaging with him in https://news.ycombinator.com/item?id=26410565 in which he wrote about me:

> You do not know what you are talking about.

> You are confused.

So my reaction was a bit too harsh partially due to that.


So why did you not present your own solutions to the issues you criticized, or better yet fix them with an RFC, rather than declaring a working system basically a failure (per your title)? I think you wouldn't have gotten 10% of the saltiness without such an aggressive title on your article.


I tried to raise those issues in the stabilization issue (I know, quite late in the game), but it simply got shut down by the linked comment, with a clear message that further discussion, whether in an issue or in a new RFC, would be pointless.

Also please note that the article is not mine.


You can know the F-35 is a disastrous government project just from looking at it; "why not submit a better design?" isn't a helpful response. You might be interested in the discussion from here: https://news.ycombinator.com/item?id=26407770


Is it just me, or are you supporting your parent's point of:

> ...the decision was effectively made on the ground of "we want to ship async support as soon as possible" [1].

When you write:

> Moreover, getting a usable async/await MVP was absolutely essential to getting Rust the escape velocity to survive the ejection from Mozilla...

This whole situation saddens me. I wish Mozilla could have given you guys more breathing room to work on such critical parts. Regardless, thank you for your dedication.


That is not a correct reading of the situation. async/await was not rushed, and does not have flaws that could have been solved with more time. async/await will continue to improve in a backward compatible way, as it already has since it was released in 2019.


Please keep going, Rust is awesome and one of the few language projects trying to push the efficient frontier and not just rolling a new permutation of the trade-off dice.


I've jumped on the Rust bandwagon as part of ZeroTier 2.0 (not rewriting its core, but rewriting some service stuff in Rust and considering the core eventually). I've used a bit of async and while it's not as easy as Go (nothing is!) it's pretty damn ingenious for language-native async in a systems programming language.

I personally would have just chickened out on language native async in Rust and told people to roll their own async with promise patterns or something.

Ownership semantics are hairy in Rust and require some forethought, but that's also true in C and C++ and in those languages if you get it wrong there you just blow your foot off. Rust instead tells you that the footgun is dangerously close to going off and more or less prohibits you from doing really dangerous things.

My opinion on Rust async is that its warts are as much the fault of libraries as of the language itself. Async libraries are overly clever, falling into the trap of favoring code brevity over code clarity. I would rather they forced me to write a little more boilerplate but gave me a clearer idea of what's going on, instead of relying on magic voodoo closure tricks like:

https://github.com/hyperium/hyper/issues/2446

Compare that (which was the result of hours of hacking) to their example:

https://hyper.rs/guides/server/hello-world/

WUT? I'm still not totally 100% sure why mine works or how theirs works, and I don't blame Rust. I'd rather have seen this interface (in hyper) implemented with traits and interfaces. Yes, it would force me to write something like a "factory," but I would have spent 30 minutes doing that instead of three hours figuring out how the fuck make_service_fn() and service_fn() are supposed to be used and how to get a f'ing Arc<> in there. It would also result in code that someone else could load up and easily understand without a half page of comments.

The rest of the Rust code in ZT 2.0 is much clearer than this. It only gets ugly when I have to interface with hyper. Tokio itself is even a lot better.

Oh, and Arc<> gets around a lot of issues in Rust. It's not as zero-cost as Rc<> and Box<> and friends but the cost is really low. While async workers are not threads, it can make things easier to treat them that way and use Arc<> with them (as long as you avoid cyclic structures). So if async ownership is really giving you headaches try chickening out and using Arc<>. It costs very very little CPU/RAM and if it saves you hours of coding it's worth it.
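To make the "chicken out and use Arc<>" suggestion concrete, here's a minimal sketch. Plain threads stand in for async tasks here (the same clone-then-move pattern applies if you hand the closure to `tokio::spawn` instead), and the `Config`/`run_workers` names are made up for illustration:

```rust
use std::sync::Arc;
use std::thread;

struct Config {
    name: String,
}

// Spawn one worker per index; each worker gets its own Arc clone of the
// shared config, so there is no fight with the borrow checker over lifetimes.
fn run_workers(cfg: Arc<Config>) -> Vec<String> {
    let handles: Vec<_> = (0..4)
        .map(|i| {
            let cfg = Arc::clone(&cfg); // cheap refcount bump, not a deep copy
            thread::spawn(move || format!("{}-{}", cfg.name, i))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let cfg = Arc::new(Config { name: String::from("zt") });
    for line in run_workers(cfg) {
        println!("{}", line); // zt-0 .. zt-3, in join order
    }
}
```

The cost per clone is one atomic increment, which is why it's usually a fine trade against hours of lifetime wrangling.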

Oh, and to remind people: this is a systems language designed to replace C/C++, not a higher level language, and I don't expect it to ever be as simple and productive as Go or as YOLO as JavaScript. I love Go too but it's not a systems language and it imposes costs and constraints that are really problematic when trying to write (in my case) a network virtualization service that's shooting (in v2.0) for tens of gigabits performance on big machines.


I skimmed some of this, but are you asking why you need to clone in the closure? Because "async closures" don't exist at the moment, the closest you can get is a closure that returns a future, this usually has the form:

   <F, Fut> where F: Fn() -> Fut, Fut: Future
i.e. you call some closure f, which returns a future that you can then await. When writing that out, it will look like:

   || {
    // closure
       async move {
          // returned future
       }
   }
`make_service_fn` likely takes something like this and puts it in a struct, then for every request it will call the closure to create the future that processes the request. (edit: and indeed it does; its definition literally takes your closure and uses it to implement the Service trait, which you are free to do yourself if you didn't want to write it this way https://docs.rs/hyper/0.14.4/src/hyper/service/make.rs.html#...)

The reason you need to clone in the closure is that the closure is what 'closes over' the scope and is able to capture the Arc reference you need to pass to your future. Whenever make_service_fn uses the closure you passed it, it will call the closure, which can clone your Arc references, then create a future with those references "moved" in.

It's a little deceptive as this means the exact same thing as above, just with the first set of curly braces not needed

   || async move {}
This is still a closure which returns a Future. Does all of that make sense? Perhaps they could use a more explicit example, but it also helps to carefully read the type signature.
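To make the "clone inside the closure" part concrete, here's a hedged, self-contained sketch. `make_service` below is a made-up stand-in for hyper's make_service_fn (it just calls the factory twice, as if two connections arrived); it is not hyper's real API. Watching `Arc::strong_count` shows that each future produced by the closure ends up owning its own clone:

```rust
use std::future::Future;
use std::sync::Arc;

// Hypothetical stand-in for make_service_fn: call the factory closure once
// per "connection" and hand back the resulting (unpolled) futures.
fn make_service<F, Fut>(f: F) -> (Fut, Fut)
where
    F: Fn() -> Fut,
    Fut: Future,
{
    (f(), f())
}

fn main() {
    let state = Arc::new(String::from("shared"));
    let captured = Arc::clone(&state);
    // The closure clones the Arc on every call; each returned future owns
    // its own reference, moved in via `async move`.
    let (a, b) = make_service(move || {
        let state = Arc::clone(&captured);
        async move { state.len() }
    });
    // `state` + the clone inside future `a` + the clone inside future `b`.
    // (The closure's own `captured` was dropped when make_service consumed f.)
    println!("{}", Arc::strong_count(&state)); // 3
    drop(a);
    drop(b);
    println!("{}", Arc::strong_count(&state)); // 1
}
```

This is the whole reason the hyper examples clone twice: once into the closure, and once per invocation of the closure.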


Wait so you're saying "|| async move {}" is equivalent to "|| move { async move {} }"? If so then mystery solved, but that is not obvious at all and should be documented somewhere more clearly.

In that case all I'm doing vs. their example is explicitly writing the function that returns the promise instead of letting it be "inferred?"


Well, no, that second one isn't valid rust, perhaps you mean:

   move || async move {} 
But this is not equivalent to:

  || async move {}
crucially, the closure is not going to take ownership of anything. This is kind of beside the point, though; what I'm getting at is that both of the above are a closure which returns a future, i.e. you can also write them in this style:

    || {
       return async move {};
    }
Maybe that's more clear with the explicit return?

I don't understand your second question about it being "inferred"; I never used that word. make_service_fn is a convenience function for implementing the Service trait.


Ohhh.... I think I get it. The root of my confusion is that BRACES ARE OPTIONAL in Rust closures.

This is apparently valid Rust:

    let func = || println!("foo!");

I didn't know that, which is why I thought "|| async move ..." was some weird form of pseudo-async-closure instead of what it is: a function that returns an async function.

Most of the code I see always uses braces in closures for clarity, but I now see that a lot of async code does not.


> I didn't know that, which is why I thought "|| async move ..." was some weird form of pseudo-async-closure instead of what it is: a function that returns an async function.

It does not return an async function, it is a closure that returns a future. Carefully read the function signature I had posted:

    fn foo<F, Fut>(f: F) where F: Fn() -> Fut, Fut: Future
async move {} is just a future, there is no function call. || is a closure, put them both together and you have a closure that returns a future.

edit: I'm trying to think of how else to explain this. A future is just a state machine, an expression; there is no function call.

   let f = async move { };
Is a valid future, you can f.await just fine.
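If it helps, here's a tiny self-contained sketch showing both halves: a future is inert until polled, and a closure returning a future matches the `Fn() -> Fut where Fut: Future` shape. The `block_on` here is a toy busy-polling executor with a no-op waker, written only for this demo (fine for futures that never actually wait, useless for real I/O), not something from a library:

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A waker that does nothing; our futures never return Pending anyway.
fn no_op(_: *const ()) {}
fn clone_raw(_: *const ()) -> RawWaker {
    raw_waker()
}
static VTABLE: RawWakerVTable = RawWakerVTable::new(clone_raw, no_op, no_op, no_op);
fn raw_waker() -> RawWaker {
    RawWaker::new(std::ptr::null(), &VTABLE)
}

// Toy executor: pin the future and poll it until it's Ready.
fn block_on<F: Future>(fut: F) -> F::Output {
    let waker = unsafe { Waker::from_raw(raw_waker()) };
    let mut cx = Context::from_waker(&waker);
    let mut fut: Pin<Box<F>> = Box::pin(fut);
    loop {
        if let Poll::Ready(out) = fut.as_mut().poll(&mut cx) {
            return out;
        }
    }
}

fn main() {
    // `async move { ... }` is just an expression producing a future (a state
    // machine); nothing inside runs until the future is polled.
    let f = async move { 40 + 2 };
    // And this is the closure-returning-a-future shape from the thread:
    let make = || async move { "hello" };
    println!("{}", block_on(f)); // 42
    println!("{}", block_on(make())); // hello
}
```

Calling `make()` builds a fresh future each time; nothing executes until `block_on` polls it.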


You are awesome. Thank you for clarifying these things.


Thank you for your tremendous work!

