Think about how the three major recent incidents were caught: not by individual users installing packages, but by security companies running automated scans on new uploads and flagging things for audits. A delay would work quite well in that model, and it's cheap in the many cases where there isn't a burning need to install something which just came out.
Quite possibly - there have been several incidents recently, and a number of researchers working together, so it's not clear exactly who found what first, and it's definitely not as simple to fix as tossing a tool in place.
The CEO of socket.dev, for example, described an automated pipeline that flags new uploads for analysts, which is good but not instantaneous:
The Aikido team also appear to be suggesting that they investigated a suspicious flag (apologies if I'm misreading their post), which again takes time for analysts to work through:
My thought was simply that these were caught relatively quickly by security researchers rather than by compromised users reporting breaches. If you didn't install updates within a relatively short period after they were published, the subsequent response would keep you safe. Obviously that's not perfect, and a sophisticated, patient attack like the one liblzma suffered would likely still be possible, but there really does seem to be value in having something like Debian's unstable/stable divide, where researchers and thrill-seekers would get everything ASAP but most people would give it some time to be tested. What I'd really like to see is a community model for funding that, and especially for supporting independent researchers.
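For what it's worth, you can already approximate that delay window today without any new infrastructure, using npm's real `--before` flag, which resolves dependencies to the versions that existed at a given date. A rough sketch (the 7-day window is an illustrative choice, not a recommendation, and the GNU `date` syntax shown won't work on macOS's BSD `date`):

```shell
#!/bin/sh
# Compute a cutoff timestamp 7 days in the past (GNU date syntax).
CUTOFF=$(date -u -d '7 days ago' +%Y-%m-%dT%H:%M:%SZ)
echo "Resolving registry state as of $CUTOFF"

# npm will then pick the newest versions that were already published
# before the cutoff, giving researchers a week to catch bad uploads:
#   npm install --before="$CUTOFF"
```

This obviously doesn't help against a patient attacker whose payload sits dormant for weeks, but it does convert "caught within days by scanners" into "never installed by you."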
Wow, so couldn't said security companies establish their own registry that we could point to instead, where packages would only get updated after they'd reviewed and approved them?
I mean, I'd probably be okay paying a yearly fee for access to such a registry.