
In the 1990s mainframes got so stable and redundant that some were not rebooted in over a decade - they could even upgrade the kernel without rebooting. Then one company had a power failure and the backup generators failed. When the power came back, it took months to figure out everything the machine had been doing, and then how to restart services whose original authors had quit years earlier.

Most companies started rebooting the mainframe every six months to ensure they could restart it.



That's why I delete all my company's data stores every quarter too!


I was very supportive of the infrastructure IT team when they moved their datacenter. I also had popcorn ready while watching the switch being (figuratively) flipped.

It went surprisingly well despite the systems having run in the old DC for 15 years without a reboot. They were super scared of exactly the case you described, but apart from some minor issues (and a lot of cussing) it was OK.


The data center where I work self-tests this stuff unintentionally a couple of times a year. The typical case: UPS maintenance, the room is put on bypass, and the load drops when switching back.


This is arguably better than not having any tests.

This is why I reboot my server from time to time after applying patches or making more significant changes, despite the fact that "it should not change anything". This is a good moment to realize that it did change something, and you have the opportunity to fix the issue while it is fresh in your mind, and possibly with more time.
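
For what it's worth, here is a minimal sketch of the kind of post-reboot check I mean, assuming a systemd-based Linux host. The service names are made-up examples; the idea is just to run it once the machine is back up and confirm that everything you expect actually restarted:

    #!/usr/bin/env python3
    # Post-reboot sanity check: verify expected services came back up.
    # Assumes a systemd-based host; the service list below is hypothetical.
    import subprocess
    import sys

    EXPECTED_SERVICES = ["nginx", "postgresql", "cron"]  # adjust to your setup

    def is_active(service: str) -> bool:
        # `systemctl is-active --quiet` exits 0 only when the unit is active.
        result = subprocess.run(
            ["systemctl", "is-active", "--quiet", service],
            check=False,
        )
        return result.returncode == 0

    def main() -> int:
        failed = [s for s in EXPECTED_SERVICES if not is_active(s)]
        if failed:
            print(f"Not running after reboot: {', '.join(failed)}")
            return 1
        print("All expected services came back up.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Running something like this right after the reboot tells you immediately whether a service silently depended on state that didn't survive the restart, while the change is still fresh in your mind.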



