This is touched on in Olivier Giroux's talk on forward progress in C++[1]. I've time-stamped the section on the roach motel problem, which ends with the observation that the execution model should capture "as-if isolated for a finite time." But even if C++ were improved in this way, you'd still only get the guarantee if you expressed the write as an atomic. There are no guarantees for non-atomic writes, and there shouldn't be.
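To make that concrete, here's a minimal sketch (mine, not from the talk) of the difference. A spin loop on a plain bool is a data race, so the compiler may hoist the load and spin forever; with std::atomic, the standard guarantees the store becomes visible in finite time:

```cpp
#include <atomic>
#include <thread>

// With a plain bool, the waiting loop is a data race (undefined behavior),
// and the compiler may legally hoist the load:
//
//   bool done = false;
//   while (!done) { }   // may compile to `if (!done) for (;;);`
//
// With std::atomic, the store must become visible to the spinning thread
// within a finite amount of time, so this always terminates:
int atomic_flag_demo() {
    std::atomic<bool> done{false};
    int result = 0;
    std::thread t([&] {
        result = 42;                                  // ordinary write...
        done.store(true, std::memory_order_release);  // ...published by the release store
    });
    while (!done.load(std::memory_order_acquire)) { } // guaranteed to finish
    t.join();
    return result;  // the acquire load synchronizes-with the release store,
}                   // so reading `result` here is race-free
```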
But this gets to the deeper problem of the blog. There are three ways you can reason about these kinds of problems:
1. What happens with the execution of assembly language on the chip? Behavior is pretty well specified by the ISA, and you absolutely can reason operationally. The x86 memory model is TSO (total store order), so you get fairly strong guarantees; ordinary aligned loads and stores behave like acquire/release "for free," so you know you won't get tearing or other such surprises.
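As an illustration of the "no tearing" part (my sketch, not from the comment): even memory_order_relaxed forbids torn reads at the language level, and on x86-64 an aligned 64-bit relaxed load or store compiles down to a plain MOV, which is where the "for free" comes from. The writer below alternates between two all-or-nothing bit patterns; a torn read would produce a mixed value:

```cpp
#include <atomic>
#include <cstdint>
#include <thread>

// Writer flips between two recognizable 64-bit patterns; if any read
// could tear, the reader would observe a half-updated mixture.
bool no_tearing_demo() {
    constexpr std::uint64_t A = 0x0000000000000000;
    constexpr std::uint64_t B = 0xFFFFFFFFFFFFFFFF;
    std::atomic<std::uint64_t> x{A};
    std::atomic<bool> stop{false};
    std::thread writer([&] {
        for (int i = 0; !stop.load(std::memory_order_relaxed); ++i)
            x.store(i % 2 ? A : B, std::memory_order_relaxed);
    });
    bool ok = true;
    for (int i = 0; i < 1000000; ++i) {
        std::uint64_t v = x.load(std::memory_order_relaxed);
        if (v != A && v != B) ok = false;  // a torn read would land here
    }
    stop.store(true, std::memory_order_relaxed);
    writer.join();
    return ok;
}
```

The same experiment with a plain (non-atomic) uint64_t would be undefined behavior, which circles back to the point above: the guarantee exists only when you say "atomic."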
2. The formal memory model provided by the language. In the case of C and C++, it's also pretty well specified, and there's lots of work that's gone into it over many years. Again, it's possible to reason productively in this world, though it's quite different from case 1 - ultimately you've got causality graphs and other things expressed on an abstract machine. Ideally you'd do this work with formal proofs, but model checkers like CDSChecker can help a lot.
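Here's a small example of what reasoning in that world looks like, using the classic "store buffering" litmus test (my sketch). In the C++ model you don't think about store buffers at all: with seq_cst there is a single total order over the four operations, and in any such order at least one thread's load comes after the other thread's store, so r1 == r2 == 0 is impossible. Downgrade the operations to memory_order_relaxed and the model permits that outcome:

```cpp
#include <atomic>
#include <thread>

// Store-buffering litmus test under seq_cst: the model forbids both
// loads observing 0, so this returns true on every conforming
// implementation. The argument is purely in terms of the abstract
// machine's total order, not any particular chip's behavior.
bool store_buffering_seq_cst() {
    for (int i = 0; i < 1000; ++i) {
        std::atomic<int> x{0}, y{0};
        int r1 = -1, r2 = -1;
        std::thread t1([&] { x.store(1); r1 = y.load(); });  // seq_cst by default
        std::thread t2([&] { y.store(1); r2 = x.load(); });
        t1.join();
        t2.join();
        if (r1 == 0 && r2 == 0) return false;  // forbidden by seq_cst
    }
    return true;
}
```

This is also exactly the kind of property a model checker like CDSChecker can verify exhaustively, rather than by running it a thousand times and hoping.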
3. Informal reasoning based on an intuitive model of what the computer "should" do. This is serious YOLO territory, and basically a guarantee that whatever you write will be broken, possibly leading to serious security vulnerabilities. It's popular among a subset of confidently wrong HN commenters though, who I expect will come out in force in this thread.