Yes, but that is a slightly different question: how long do you keep something in the standard after all the relevant hardware has disappeared? That is, is there a framework for periodically re-evaluating decisions in light of the changing hardware landscape?
My question was more about when behavior is being defined for the first time, which admittedly doesn't happen that often (but it could apply, e.g., when the fixed-width integer types, uintX_t and friends, were introduced).
The original standard's feature specifications were not meant to give a 1-to-1 mapping from C onto hardware; rather, we used practical experience to judge what overhead was acceptable for the kinds of processors we had seen, or thought were reasonable choices that architects might make in the not-too-distant future. If a frequently executed action had to (for example) check for a special condition every time, the overhead might increase by several percent, depending on the instruction set architecture. So quite often we argued that "if the programmer wants to test for that condition, he can do so, but typically it is a waste of cycles". There are a lot of such trade-offs; maybe we should write a paper or book on this topic.