The author of that post, Nolan, is a pretty interesting guy who's deep in the web tech stack. He's really one of the last people I'd call "tribal", especially since you mention React. This guy hand-writes his web components, files bug reports to browsers, writes his own memory-leak-detection lib, and so on.
If such a guy is slowly dipping his toes into AI and comes to the conclusion he just posted, you should take a step back and consider your position.
I really don't care what authority he's arguing from. The "just try it" pitch here is fundamentally a tribalist argument: tribes don't want a rival tribe to exist that they view as threatening.
Trying a new technology seems like what engineers do (since they have to leverage technology to solve real problems, having more tools to choose from can be good). I'm surprised it rings as tribalist.
The impression I get from this post is that anyone who doesn't like it needs to try it more. It doesn't really feel like it leaves space for "yeah, I tried it, and I still don't want to use it".
I know what its capabilities are. If I wanted to manage a set of enthusiastic junior engineers, I'd work with interns, which I love doing because they learn and get better. (And I still wouldn't want to be the manager.) AIs don't learn, not from your feedback anyway; they sporadically get better from a new billion-dollar training run, where "better" has no particular correlation with your feedback.
I think it's going to be important to track. It's going to change things.
I agree on your specific points about what you prefer, and that's fine. But as I said 15 years ago to some recent Berkeley grads I was working with: "You have no right to your current job. Roles change."
AI will get better and be useful for some things. I think it is today. What I'm saying is that you want to be in the group that knows how to use it, and you can't get there if you have no experience.
Honestly that's what makes this all the more dangerous. He's trying to have his cake and eat it too: accept all of the hype and all of the propaganda, but then couch it in the rhetoric of "oh I'm so concerned I can remain in a sort of moderate & empathetic position and not fall prey to tribalism and flame wars."
There's no both-sides-ing of genAI. This is an issue akin to street narcotics, mass weapons of war, or forever chemicals. You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity. The OP is not a thoughtful moderate because that's not how any of this works.
> You're either on the side of heavy regulation or outright bans, or you're on the side of tech politics which are directly harmful to humanity.
I don't think this has yet been established. We'll have to wait and see how it turns out. My inclination is it'll turn out like most other technological advancements - short term pain for some industries, long term efficiency and comfort gain for humans.
Despite the anti-capitalist zeitgeist, more humans today live like kings compared to a few hundred years ago, or even 100 years ago.
But you seem to have jumped to the conclusion that everyone agrees AI is harmful.