Why would this be an indictment of any specific database technology? If your disk fails and corrupts the filesystem, you're toast, regardless of what database you are using.
Having worked with critical infrastructure that lacked true in-depth oversight, it wouldn't surprise me if DR plans were never executed or exercised in any meaningful way.
Yep and if you ship WAL transaction logs to standby databases/replicas, corrupt blocks or lost writes in the primary database won't be propagated to the standbys (unlike with OS filesystem or storage-level replication).
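To illustrate the point (toy code only, with made-up names, not any real database's internals): the standby rebuilds its own pages by replaying the logical WAL records, so garbage in the primary's on-disk pages never crosses the wire.

```python
# Toy model of WAL shipping. The primary logs logical records; the standby
# replays them into its own pages, so physical corruption of the primary's
# pages is never propagated.

def apply(pages, record):
    key, value = record
    pages[key] = value

primary_pages, wal = {}, []
for record in [("a", 1), ("b", 2)]:
    wal.append(record)            # write-ahead: log first...
    apply(primary_pages, record)  # ...then modify the page

primary_pages["a"] = "GARBAGE"    # simulate on-disk corruption on the primary

standby_pages = {}
for record in wal:                # standby replays the shipped WAL records
    apply(standby_pages, record)

print(standby_pages)              # {'a': 1, 'b': 2} -- corruption not shipped
```

Storage-level replication, by contrast, copies raw blocks, so the "GARBAGE" page would be mirrored verbatim.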
Neither checks the checksum on every read as that would be performance-prohibitive. So "bad data on drive -> db does something with corrupted data and saves corrupted transformation back to disk" is very much possible, just extremely unlikely.
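For the flavor of what a page checksum buys you, here's a minimal sketch (hypothetical page layout, not any particular database's format): a CRC32 of the page body stored in the page header, verified when the page is read back.

```python
import zlib

PAGE_SIZE = 8192  # hypothetical page size

def write_page(payload: bytes) -> bytes:
    # Store a CRC32 of the page body in the first 4 bytes of the page.
    body = payload.ljust(PAGE_SIZE - 4, b"\x00")
    return zlib.crc32(body).to_bytes(4, "big") + body

def verify_page(page: bytes) -> bool:
    stored = int.from_bytes(page[:4], "big")
    return zlib.crc32(page[4:]) == stored

page = write_page(b"row data")
assert verify_page(page)

corrupted = page[:100] + b"\xff" + page[101:]  # flip a byte mid-page
assert not verify_page(corrupted)  # corruption detected on read
```

The catch is exactly the one above: the check only runs when the page is actually read with verification enabled, so corruption can sit undetected until then.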
But they said nothing about it being a bad drive, just a corrupted data file, which could very well be a software bug or operator error.
RAID doesn't really protect you from the bit rot that tends to happen from time to time.
ZFS might because it checksums the blocks.
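A toy sketch of the idea (not ZFS's actual on-disk format): the checksum lives in the parent block pointer rather than next to the data itself, so a rotted or stale block can't validate itself.

```python
import hashlib

disk = {}

def write_block(addr, data: bytes):
    disk[addr] = data
    # The block pointer stores (addr, checksum) -- the checksum is kept
    # *outside* the block it covers, as with ZFS block pointers.
    return (addr, hashlib.sha256(data).hexdigest())

def read_block(pointer):
    addr, expected = pointer
    data = disk[addr]
    if hashlib.sha256(data).hexdigest() != expected:
        raise IOError(f"checksum mismatch at block {addr}")
    return data

ptr = write_block(0, b"hello")
assert read_block(ptr) == b"hello"

disk[0] = b"hellp"  # simulate bit rot on disk
try:
    read_block(ptr)
except IOError:
    print("bit rot detected")
```

With redundancy (mirror/raidz), ZFS can additionally repair the block from a good copy, not just detect the damage.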
But if the corruption happens in memory and is then written to disk and replicated, then from the disk's perspective the data was valid.
Journaling databases are specifically designed to avoid catastrophic corruption in the event of disk failure. The corrupt pages should be detected and reported, and the database will function fine without them.
If you mean journaling file systems, no. They prevent data corruption in the case of a system crash or power outage.
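A rough sketch of that guarantee (toy code, not any real filesystem): each metadata update is logged and committed before being applied, so after a crash the journal is replayed and the filesystem ends up consistent rather than half-updated.

```python
# Toy journaling: log the intended update, commit it, then apply it.
# After a "crash", replaying committed journal entries restores consistency.

journal = []  # list of (entry, committed) pairs
state = {}

def journaled_write(key, value, crash_before_apply=False):
    entry = (key, value)
    journal.append((entry, True))  # write and commit the journal record first
    if crash_before_apply:
        return                     # crash: journal written, state not updated
    state[key] = value

def recover():
    for (key, value), committed in journal:
        if committed:
            state[key] = value     # replay committed entries (idempotent)

journaled_write("inode-7", "size=4096")
journaled_write("inode-9", "size=512", crash_before_apply=True)
recover()
print(state)  # both updates present: no torn metadata after the crash
```

Note what this does *not* do: it never re-verifies data already on disk, which is why journaling alone doesn't catch bit rot.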
That's different from filesystems that do checksumming (zfs, btrfs). Those can detect corruption.
In any case, if you use a database it handles these things by itself (see ACID). However, I don't believe databases can necessarily detect disk corruption in all cases the way checksumming file systems can.