
> The reason is because they use "SMR", which severely hurts random-write performance.

It hurts random-write performance only past a threshold, once an on-disk staging area is exhausted. For short bursts the drive behaves well -- it would probably even work in a RAID array, provided the array were initialized from a clean slate rather than rebuilt.
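The cliff described above can be sketched with a toy model (illustrative numbers only, not from any datasheet): a drive-managed SMR disk absorbs random writes into a fast conventional staging area until it fills, after which every write pays for a shingled-zone rewrite.

```python
# Toy model of DM-SMR random-write saturation. All latencies and
# capacities are made-up illustrative values.

class DmSmrDisk:
    def __init__(self, staging_blocks=100):
        self.staging_free = staging_blocks

    def random_write(self):
        """Return simulated latency in ms for one random write."""
        if self.staging_free > 0:
            self.staging_free -= 1
            return 1       # fast path: write absorbed by the staging area
        return 500         # slow path: shingled-zone read-modify-write

    def idle(self, blocks_flushed=10):
        """Background cleaning drains the staging area while idle."""
        self.staging_free += blocks_flushed

disk = DmSmrDisk(staging_blocks=100)

# A short burst looks like a perfectly normal drive...
burst = [disk.random_write() for _ in range(50)]

# ...but a sustained stream of writes (such as a rebuild) hits the cliff.
rebuild = [disk.random_write() for _ in range(200)]
print(max(burst), rebuild[0], rebuild[-1])
```

The point of the sketch is that nothing in the fast path hints at the slow path: from outside, the drive simply gets two orders of magnitude slower partway through a sustained workload.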

> But this is yet another interesting edge case that RAID software doesn't handle and reacts by just blowing away your data.

It's not obvious how a RAID controller should "handle" this. The drives have no outward indication that they suffer from random write saturation. From the controller's perspective, the degraded performance looks very much like a drive failure.



> the degraded performance looks very much like a drive failure.

Sure, but given the choice between "During a rebuild, it looks like another drive isn't doing so well, so I should give up and trash the array"

and

"During a rebuild, it looks like another drive isn't doing so well, so I should notify the administrator and meanwhile try to maintain as much redundancy as I can"

which is the sensible choice?


There is no choice to be made. Once too many disks fail the entire array has to be taken offline and that's exactly what happens.


Disks "failing" is the problem. If you treat drive state as binary (flawless / eject), you can easily eject too many drives for errors affecting disjoint 0.000001% slivers of their data, taking the whole array offline along with data that is still almost entirely readable.
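The contrast between the two policies can be sketched concretely (a hypothetical 4-disk RAID 6 tolerating 2 failures; the names and error sets are made up for illustration):

```python
from collections import Counter

RAID6_TOLERANCE = 2  # a RAID 6 array survives up to 2 failed disks

def binary_policy(disks_with_any_error):
    """Eject every disk that has shown any error at all."""
    ejected = len(list(disks_with_any_error))
    return "array offline" if ejected > RAID6_TOLERANCE else "array degraded"

def sector_aware_policy(disk_errors):
    """Keep disks online; data is lost only where a sector is bad on
    more disks than the array can tolerate at once."""
    counts = Counter(s for sectors in disk_errors.values() for s in sectors)
    lost = [s for s, n in counts.items() if n > RAID6_TOLERANCE]
    return "data loss" if lost else "array degraded"

# Three disks, each with a single bad sector -- all different sectors:
errors = {"sda": {10}, "sdb": {2000}, "sdc": {999999}}

print(binary_policy(errors.keys()))   # ejects 3 of 4 disks
print(sector_aware_policy(errors))    # every sector still reconstructable
```

Under the binary policy three ejections exceed the tolerance and the array goes offline, even though no individual sector is unreadable on more than one disk.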


A RAID array that can't rebuild is nearly 100% worthless (outside of RAID 0).


But if the reason it can't rebuild is that the drives are unfit for purpose, you can't really blame the controller.


Which is why the manufacturer shouldn't have branded them as NAS drives!


You're preaching to the choir. SMR drives are absolutely not NAS-ready, and it's insulting for WD to pretend otherwise.


> SMR drives are absolutely not NAS-ready

Drive-managed SMR (DM-SMR) at the very least. If the RAID controller (hardware or software) is SMR-aware, then host-managed or host-aware drives can work.

For example, ZFS, with its recent support for separate metadata devices[1], seems close to what you'd need for it to become SMR-capable.

[1]: https://github.com/openzfs/zfs/pull/5182
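The allocation-classes feature in the linked PR lets you steer metadata (and optionally small blocks) onto conventional-media devices while bulk data goes to the SMR disks. A hypothetical pool layout, with made-up device names, might look like:

```shell
# Bulk data on four SMR disks in raidz2; all pool metadata on a
# mirrored pair of CMR/SSD devices via a "special" allocation class.
zpool create tank \
    raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd \
    special mirror /dev/nvme0n1 /dev/nvme1n1

# Optionally route small file blocks to the special vdev as well,
# keeping small random writes off the shingled media:
zfs set special_small_blocks=64K tank
```

This keeps the write patterns SMR handles worst (small, scattered metadata updates) off the shingled media entirely, though the data vdevs would still need sequential-friendly allocation to be truly SMR-safe.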


Fair distinction.



