Hacker News

Why would this be an indictment of any specific database technology? If your disk fails and corrupts the filesystem, you're toast, regardless of what database you are using.


The technology to detect and recover from disk failures does exist. RAID and ZFS, for example.

I would not expect a disk failure to replicate to the backup.


Having worked with critical infrastructure that lacked true in-depth oversight, it wouldn't surprise me if the DR plans were never executed or exercised in a meaningful manner.


This is quite common.

Comprehensive DR testing is really difficult. Many orgs settle for "on paper" or "in theory" substitutes for real testing.

If they do it right, no problem.

Doing it right, though … there's the rub …


Yep and if you ship WAL transaction logs to standby databases/replicas, corrupt blocks or lost writes in the primary database won't be propagated to the standbys (unlike with OS filesystem or storage-level replication).

Edit: Should add "won't be silently propagated"
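A toy sketch (in Python, not PostgreSQL's actual WAL format) of why shipped log records don't get silently applied when damaged: each record carries its own checksum, so a standby can reject a corrupt record instead of quietly replaying it, unlike block-level replication which copies whatever bytes are on disk.

```python
import binascii
import struct

def make_wal_record(payload: bytes) -> bytes:
    """Prepend a CRC32 so the standby can verify the record before applying it."""
    crc = binascii.crc32(payload)
    return struct.pack(">I", crc) + payload

def apply_wal_record(record: bytes) -> bytes:
    """Verify the checksum; raise instead of silently applying a corrupt record."""
    (crc,) = struct.unpack(">I", record[:4])
    payload = record[4:]
    if binascii.crc32(payload) != crc:
        raise ValueError("corrupt WAL record detected; refusing to apply")
    return payload

rec = make_wal_record(b"UPDATE accounts SET ...")
assert apply_wal_record(rec) == b"UPDATE accounts SET ..."

# A bit flip in transit or on disk is caught, not silently propagated:
bad = rec[:10] + bytes([rec[10] ^ 0x01]) + rec[11:]
try:
    apply_wal_record(bad)
except ValueError:
    pass  # detected and reported, so the standby stays clean
```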


Neither checks the checksum on every read as that would be performance-prohibitive. So "bad data on drive -> db does something with corrupted data and saves corrupted transformation back to disk" is very much possible, just extremely unlikely.

But they said nothing about it being a bad drive, just a corrupted data file, which might very well be a software bug or operator error.


This is wrong, both ZFS and btrfs verify the checksum on every read.

It's not typically a performance concern because computing checksums is fast on modern hardware. Besides, historically IO was much slower than CPU.
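A minimal model of that read path: the checksum recorded at write time is re-verified on every read, so bit rot in the stored data surfaces as an error rather than being returned as valid data. This is an illustration of the idea only, not ZFS's or btrfs's actual checksum machinery.

```python
import zlib

class ChecksummedStore:
    """Toy model of a checksumming filesystem: every read re-verifies
    the checksum that was computed at write time."""

    def __init__(self):
        self._blocks = {}  # block_id -> (checksum, data)

    def write(self, block_id: int, data: bytes) -> None:
        self._blocks[block_id] = (zlib.crc32(data), data)

    def read(self, block_id: int) -> bytes:
        crc, data = self._blocks[block_id]
        if zlib.crc32(data) != crc:
            raise IOError("checksum mismatch: corruption detected on read")
        return data

    def corrupt(self, block_id: int) -> None:
        # Simulate bit rot: flip a bit in the stored data, leaving the
        # previously recorded checksum untouched.
        crc, data = self._blocks[block_id]
        self._blocks[block_id] = (crc, bytes([data[0] ^ 0x80]) + data[1:])

store = ChecksummedStore()
store.write(1, b"hello")
assert store.read(1) == b"hello"
store.corrupt(1)
try:
    store.read(1)
except IOError:
    pass  # the rot is detected instead of served back to the application
```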


> Neither checks the checksum on every read as that would be performance-prohibitive.

It is expensive. It might be prohibitive in a very competitive environment. This is hardly the case here. Safety first!


RAID does not really protect you from bit rot, which tends to happen from time to time. ZFS might, because it checksums the blocks. But if the corruption happens in memory and is then written to disk and replicated, then from the disk's perspective the data was valid.
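A sketch of that failure mode: if the bits flip in memory before the write path computes the checksum, the checksum faithfully covers the already-corrupted data, and every subsequent read verifies "successfully". (Illustrative only; the helper names are made up.)

```python
import zlib

def write_block(data: bytes):
    # The checksum is computed over whatever is in memory at write time.
    return (zlib.crc32(data), data)

def read_block(block) -> bytes:
    crc, data = block
    # This check passes even though the data is wrong, because the
    # corruption happened before the checksum was computed.
    assert zlib.crc32(data) == crc
    return data

good = b"balance=100"
flipped = b"balance=900"  # bit flips in RAM, before the write path runs

block = write_block(flipped)
assert read_block(block) == b"balance=900"  # "valid" from the disk's perspective
```

Catching this requires either ECC memory or an application-level checksum computed closer to where the data originates.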


> If your disk fails and corrupts the filesystem, you're toast, regardless of what database you are using.

There are databases that maintain redundant copies and can tolerate disk / replica failure. e.g. Cassandra.


Journaling databases are specifically designed to avoid catastrophic corruption in the event of disk failure. The corrupt pages should be detected and reported, and the database will function fine without them.


If you mean journaling file systems, no. They prevent data corruption in the case of system crash or power outage.

That's different from filesystems that do checksumming (zfs, btrfs). Those can detect corruption.

In any case, if you use a database, it handles these things itself (see ACID). However, I don't believe databases can necessarily detect disk corruption in all cases the way checksumming file systems can.




