ZeroFS doesn't exploit ZFS's strengths: there's no native ZFS support, just an afterthought layered on NBD + a SlateDB LSM.
It looks good for small burst workloads where everything stays in memory for LSM batch writes. Once compaction hits, all bets are off performance-wise, and I'm not sure about crash consistency either; it feels like playing with fire.
A ZFS special vdev + ZIL on SSD is much safer, with no need for an LSM. With MayaNAS, ZFS metadata runs at SSD speed while large blocks get throughput from high-latency S3 at network speed.
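For concreteness, the layout described above can be sketched with stock zpool/zfs commands. This is only an illustration: the device paths are hypothetical, and it assumes the data vdev is some S3-backed block device (e.g. an NBD export) while the SSDs are local NVMe.

```shell
# Data vdev on a high-latency S3-backed block device (hypothetical path).
zpool create tank /dev/nbd0

# Special vdev on mirrored SSDs: metadata (and optionally small blocks)
# lands here at SSD speed instead of going to S3.
zpool add tank special mirror /dev/nvme0n1 /dev/nvme1n1

# Separate log device (SLOG) on SSD to absorb synchronous write latency.
zpool add tank log /dev/nvme2n1

# Route blocks <= 64K to the special vdev; large blocks stream to S3.
zfs set special_small_blocks=64K tank
```

With this split, only large sequential blocks ever touch the high-latency backend, which is where S3 throughput is actually competitive.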
LSMs are “for small burst workloads kept in memory”? That’s just incorrect. “Once compaction hits all bets are off” suggests a misunderstanding of what compaction is for.
“Playing with fire,” “not sure about crash consistency,” “all bets are off”
Based on what, exactly? ZeroFS has well-defined durability semantics, with guarantees much stronger than those of local block devices. If there's a specific correctness issue, name it.
IIRC the point is that each NBD device is backed by a different S3 endpoint, probably in different zones/regions/whatever for resiliency.
Edit: Oops, "zpool create global-pool mirror /dev/nbd0 /dev/nbd1" is a better example for that. If it's not that, I'm not sure what that first example is doing.
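If that's the intent, the setup might look like the following sketch. It assumes (this is not confirmed by the thread) that ZeroFS exposes one NBD device per S3 backend, with each device pointed at a different endpoint or region; the device paths and pool name are just examples.

```shell
# /dev/nbd0 and /dev/nbd1 are assumed to be served by separate ZeroFS
# instances, each backed by a different S3 endpoint/region (hypothetical).
# ZFS then mirrors every write across both backends:
zpool create global-pool mirror /dev/nbd0 /dev/nbd1

# If one backend corrupts or loses data, a scrub can repair that side
# from the surviving mirror:
zpool scrub global-pool
```

The resiliency argument is that ZFS checksums let it detect which side is bad, something plain replication between buckets can't do.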
In the context of real AWS S3, I can see RAID 0 being useful in this scenario, but a mirror seems like too much duplication, and cross-region replication like this is going to introduce significant latency[citation needed]. AWS already provides that for S3.
Mirroring between S3 providers would seemingly protect against your account being locked at one of them.
I expect this becomes most interesting with L2ARC (cache) and SLOG (ZIL) devices to hold the working set and hide write latency. It might require tuning or changes to allow 1M writes to use the log device.
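A rough sketch of that tuning, with hypothetical device paths. The last two tunables are the OpenZFS-on-Linux knobs I believe gate how large a write may still take the ZIL/slog path (their exact semantics and defaults should be verified against your ZFS version):

```shell
# Attach SSDs as cache (L2ARC) and log (SLOG) vdevs; paths hypothetical.
zpool add tank cache /dev/nvme0n1
zpool add tank log /dev/nvme1n1

# Bias sync writes toward the slog instead of the (S3-backed) main vdevs.
zfs set logbias=latency tank

# Module tunables that influence whether large writes go through the
# ZIL/slog or are written "indirect" to the main pool (verify defaults):
cat /sys/module/zfs/parameters/zfs_immediate_write_sz
cat /sys/module/zfs/parameters/zil_slog_bulk
```

The open question is whether ZFS will actually push ~1M synchronous writes through the slog with these settings, or whether that needs code changes as the parent suggests.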