They are OK databases, optimized for very different use cases than normal databases. If you treat files as blobs that can only be read or written atomically, then SQLite will outperform your filesystem. But lots of applications treat files as more than that: multiple processes appending to the same log file while another program runs the equivalent of `tail -f` on the file to get the newest changes; software changing small parts in the middle of files that don't fit in memory, even mmapping them to treat them like random-access memory; using files as a poor man's named pipe; single files that span multiple terabytes; etc.
None of those other uses are outside the scope of a real database:
> multiple processes appending to the same log file, while another program runs the equivalent of `tail -f` on the file to get the newest changes
Not a problem with SQLite. In fact it ensures the atomicity needed to avoid torn reads or writes.
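A minimal sketch of that pattern in Python (the `app_log.db` path and `log` table are made-up names): any number of processes can call `append()` concurrently, and a reader polls for rows it hasn't seen yet, the `tail -f` equivalent.

```python
import sqlite3
import time

DB = "app_log.db"  # hypothetical shared database file

def connect():
    con = sqlite3.connect(DB, timeout=5.0)
    con.execute("PRAGMA journal_mode=WAL")  # readers never block writers
    con.execute("CREATE TABLE IF NOT EXISTS log(id INTEGER PRIMARY KEY, line TEXT)")
    return con

def append(con, line):
    # Each insert is its own transaction: concurrent appends are serialized
    # by SQLite, so no entry is ever torn or interleaved.
    with con:
        con.execute("INSERT INTO log(line) VALUES (?)", (line,))

def tail_f(con, poll_interval=0.5):
    """Yield log lines as other processes append them (`tail -f` equivalent)."""
    last_seen = 0
    while True:
        rows = con.execute(
            "SELECT id, line FROM log WHERE id > ? ORDER BY id", (last_seen,)
        ).fetchall()
        for rowid, line in rows:
            last_seen = rowid
            yield line
        time.sleep(poll_interval)
```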
> software changing small parts in the middle of files that don't fit in memory
This is exactly the kind of thing you don't have to worry about if you're using a database: it handles that transparently for every application, instead of each application having to replicate that logic itself. Just do your reads and writes against your structured data and the database will keep only the live set in memory, based on current resource constraints.
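For raw bytes specifically, SQLite's incremental blob I/O does the same job. A minimal sketch, assuming Python 3.11+ (which added `Connection.blobopen`) and a made-up `files` table: patch a few bytes in the middle of a 100 MB blob without ever holding the whole value in memory.

```python
import sqlite3

con = sqlite3.connect("bigdata.db")  # hypothetical database file
con.execute("CREATE TABLE IF NOT EXISTS files(id INTEGER PRIMARY KEY, data BLOB)")
# zeroblob() allocates a 100 MB value without materializing it in memory
con.execute("INSERT OR IGNORE INTO files(id, data) VALUES (1, zeroblob(100000000))")
con.commit()

# Open the blob in row 1 for writing and overwrite 8 bytes in the middle;
# SQLite only pages in the parts that are actually touched.
with con.blobopen("files", "data", 1) as blob:
    blob.seek(50_000_000)
    blob.write(b"patched!")
con.commit()
```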
> using files as a poor man's named pipe
Even better, use a shared SQLite database instead, which even lets you share structured data.
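A minimal sketch of that, again with made-up names (`pipe.db`, a `queue` table): `send()` and `receive()` behave like writing to and reading from a pipe, except each message is a structured row rather than a byte stream you have to frame yourself.

```python
import sqlite3

def connect():
    con = sqlite3.connect("pipe.db", timeout=5.0)
    con.execute("PRAGMA journal_mode=WAL")
    con.execute("CREATE TABLE IF NOT EXISTS queue(id INTEGER PRIMARY KEY, msg TEXT)")
    return con

def send(con, msg):
    with con:
        con.execute("INSERT INTO queue(msg) VALUES (?)", (msg,))

def receive(con):
    """Pop the oldest message, or None if the queue is empty.

    Uses DELETE ... RETURNING (needs SQLite 3.35+), so the read and the
    delete happen in one atomic statement: two readers can't pop the same row.
    """
    with con:
        row = con.execute(
            "DELETE FROM queue WHERE id = (SELECT min(id) FROM queue) RETURNING msg"
        ).fetchone()
    return None if row is None else row[0]
```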
> Single files that span multiple terabytes; etc.
SQLite as it stands supports databases up to about 281 TB (4294967294 pages at the 64 KiB maximum page size; the limit was 140 TB before version 3.33.0 raised the maximum page count).
> even mmapping them to treat them like random access memory
This is pretty much the only use I can think of that isn't supported by SQLite out of the box (SQLite does use memory-mapped I/O internally via `PRAGMA mmap_size`, but it doesn't expose your data as application-addressable memory). No reason it can't be extended for this case if needed.
Totally shit databases, designed before we knew anything about databases. Well past time to retire them.