Google has had a number of notorious outages caused by similar things, including pushing a completely blank front-end load balancer config to global production. The post-mortem action items for these are always deep thoughts about the safety of config changes, but in my experience there, nobody ever really fixes them because the problem is genuinely hard.
For this kind of change I would probably have wanted some kind of shadow system that loaded the new config, received production inputs, produced responses that were monitored but discarded, and had no other observable side effects. That's such a pain in the ass that most teams aren't going to bother setting that up, even when the risks are obvious.
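To make the shape of that concrete, here's a minimal sketch of what such a shadow harness might look like, assuming a request handler that's parameterized by its config. Every name here is hypothetical, not any real system's API:

```python
import logging

def make_shadow_handler(live_config, candidate_config, handle):
    """Serve every request with live_config; also evaluate it against
    candidate_config, log any divergence, and discard the shadow response."""
    def handler(request):
        live_response = handle(request, live_config)
        try:
            shadow_response = handle(request, candidate_config)
            if shadow_response != live_response:
                logging.warning("shadow divergence on %r", request)
        except Exception:
            # A crash in the shadow path must never affect serving.
            logging.exception("shadow path failed on %r", request)
        return live_response  # only the live response is observable
    return handler
```

The whole point is that last line: the candidate config can be arbitrarily broken and the worst it can do is emit log spam, which is exactly the signal you wanted before promoting it.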
Actually, now that I remember it correctly, back when I was in that barber shop quartet in Skokie^W^W^W^W^W err, back when I was an SRE on Gmail's delivery subsystem, we did recognize the incredible risk posed by the delivery config system, and our team developed what was known as "the finch": a tiny production shard that loaded the config before all others. It was called the finch to distinguish it from "the canary", which was generally used for deploying new builds. I wonder if these newfangled "best practices" threw the finch under the bus.
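The finch idea generalizes to any staged config push: order your shards from smallest blast radius to largest and halt on the first unhealthy stage. A toy sketch of that ordering, with entirely invented names (this is not how the real system worked, just the spirit of it):

```python
def push_config(config, stages, apply, healthy):
    """Push config through ordered shard groups, smallest-risk first.

    stages: list of shard groups, e.g. [["finch"], ["canary"], ["shard-1", "shard-2"]]
    apply:  callback that loads the config onto one shard
    healthy: callback that checks one shard after the push
    """
    for stage in stages:
        for shard in stage:
            apply(shard, config)
        # Stop before the blast radius grows if anything in this stage broke.
        if not all(healthy(shard) for shard in stage):
            return "halted at stage %r" % (stage,)
    return "pushed everywhere"
```

With a stage list like `[["finch"], ["canary"], ["shard-1", "shard-2"]]`, a config that breaks the finch never reaches the canary, let alone the fleet.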