I don't understand this comment. Spinning disks are still used very often in computing.
SSD/NVMe/etc. drives are great for small amounts of storage, but if you need to process something substantial, like petabytes of data, you need to know how to write code that operates on spinning hard drives.
Most people in ML or data science should default to writing for spinning disks, since they often have to deal with >>4TB of data.
You're imagining a focus on huge data that did not previously exist in the conversation, and that doesn't reflect today's real world anyway. If you care about performance, you use SSDs, even if your dataset is large. Consumer 2TB SSDs have been under $100 for a while now, and enterprise SSDs can fit 1PB in a single 1U or 2U box. There are very few niches where hard drives are still relevant as anything other than cold storage. Nothing I said implied those niches don't exist; my comment was mainly about how understanding SSDs and their fundamental differences from hard drives is valuable knowledge for the more common use cases.
In particular, making good use of SSDs requires your application to issue many simultaneous IO requests, something hard drives handle relatively poorly and that many IO APIs designed before the SSD era make difficult or impossible.
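To illustrate the point about keeping many requests in flight: a minimal Python sketch, using `os.pread` (positional reads with no shared file-position state) plus a thread pool to keep multiple reads outstanding at once. The file, block size, and worker count are made-up illustration values, not anything from the thread; on a real SSD the concurrent version lets the device's internal parallelism work, while the sequential loop is the pre-SSD-era pattern of one outstanding request at a time.

```python
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

BLOCK = 4096     # illustrative block size
NBLOCKS = 64     # illustrative file size: 64 blocks

# Create a scratch file to read back.
tmp_fd, path = tempfile.mkstemp()
os.write(tmp_fd, os.urandom(BLOCK * NBLOCKS))
os.close(tmp_fd)

fd = os.open(path, os.O_RDONLY)

def read_block(i: int) -> bytes:
    # os.pread reads at an explicit offset, so many threads can share
    # one descriptor without racing on the file position.
    return os.pread(fd, BLOCK, i * BLOCK)

# Old-style loop: one outstanding request at a time. Fine for a
# spinning disk, but leaves an SSD's queue nearly empty.
sequential = [read_block(i) for i in range(NBLOCKS)]

# Many requests in flight at once: the thread pool keeps the device
# busy with multiple concurrent reads.
with ThreadPoolExecutor(max_workers=16) as pool:
    concurrent = list(pool.map(read_block, range(NBLOCKS)))

os.close(fd)
os.unlink(path)

assert concurrent == sequential  # same data, issued with more parallelism
```

A thread pool over `pread` is just the simplest portable way to show multiple outstanding requests; the same idea is what interfaces like Linux AIO or io_uring expose more directly, and what a single blocking `read()` loop cannot express.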