> Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent
A lot of the gadgets that Steve Jobs claimed Apple (or rather, he) had envisioned - as I wrote, Steve Jobs was an exceptional salesman - already existed before, just with somewhat rougher edges. Those earlier products did not sell as well because their makers lacked a marketing department that could make people believe they were buying the next big thing.
> Jobs envisioned the iPad and iPhone. [...]
Everyone around him at that time has commented on this. Are you going to claim they’re all lying?
I don't claim that they are all lying, but I do claim that quite a few people fell for Apple's marketing (as I wrote: "Jobs' talent was that he was an incredibly talented salesman.").
> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.
> Not everyone is motivated by the highest wage they can get.
This idealism always goes away once you have to buy a home and realize you're working more hours for less money than your mates in industries that are easier to get into; then you switch really quickly.
People aren't selfless when it comes to being exploited by private sector entities, they'll always go towards the ones with the best wage/hour ratios.
People aren't stupid. Why would they voluntarily choose to work harder and be less well off? It's not as if this work serves the public good the way medicine, firefighting, EMS, or education does.
IMHO, the real problem is that they create an even greater dissonance between online life and IRL.
Think about dating apps: the pictures could be fake, and now the words exchanged can be fake too.
You thought you were arguing with a gentle and smart colleague over chat and email; too bad, when you meet them at a conference or a restaurant you find them very unpleasant.
But isn't it the correction of those errors that is valuable to society and gets us a job?
People can tell when they've found a bug, or describe what they want from a piece of software, yet it takes skill to fix the bugs and build the software. Though LLMs can speed up the process, expert human judgment is still required.
If you know that you need O(n) "contains" checks and O(1) retrieval for items, for a given order of magnitude, it feels like you have all the pieces of the puzzle needed to keep the LLM on the straight and narrow, even if you didn't know off the top of your head that you should choose ArrayList.
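For instance, those two guarantees are exactly what ArrayList gives you; a minimal sketch (class and variable names are mine):

```java
import java.util.ArrayList;
import java.util.List;

public class ListChoice {
    public static void main(String[] args) {
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) items.add(i);

        // ArrayList: get(i) is O(1) (direct array indexing),
        // contains(x) is O(n) (linear scan) -- matching the
        // requirements described above. A LinkedList would make
        // get(i) O(n) as well, which is the wrong trade-off here.
        int x = items.get(500);              // constant-time retrieval
        boolean found = items.contains(999); // linear scan

        System.out.println(x + " " + found); // 500 true
    }
}
```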
Or if you know that string manipulation might be memory intensive so you write automated tests around it for your order of magnitude, it probably doesn't really matter if you didn't know to choose StringBuilder.
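To make the StringBuilder point concrete, here's a minimal sketch (sizes chosen arbitrarily): naive `+=` in a loop re-copies the whole string on every iteration, which is O(n^2) overall, while StringBuilder appends into a growable buffer in amortized O(n).

```java
public class Concat {
    public static void main(String[] args) {
        int n = 10_000;

        // StringBuilder appends into a growable internal buffer,
        // avoiding the repeated full-string copies that
        // `result += "x"` in a loop would cause.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) sb.append('x');
        String s = sb.toString();

        System.out.println(s.length()); // 10000
    }
}
```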
That feels different to e.g. not knowing the difference between an array list and linked list (or the concept of time/space complexity) in the first place.
My gut feeling is that, without wrestling with data structures at least once (e.g. during a course), then that knowledge about complexity will be cargo cult.
When it comes to fundamentals, I think it's still worth the investment.
To paraphrase, "months of prompting can save weeks of learning".
I think the kind of judgement required here is designing ways to test the code without inspecting it manually line by line; that would be walking a motorcycle, and you'd only be vibe-testing. That is why we have seen the FastRender browser and the JustHTML parser: the testing part was solved upfront, so the AI could go nuts implementing.
I partially agree, but I don’t think “design ways to test the code without inspecting it manually line by line” is a good strategy.
Tests only cover cases you already know to look for. In my experience, many important edge cases are discovered by reading the implementation and noticing hidden assumptions or unintended interactions.
When something goes wrong, understanding why almost always requires looking at the code, and that understanding is what informs better tests.
Instead, learning concepts with AI and then using HI (Human Intelligence) and AI together to solve the problem at hand, going through the code line by line and writing tests, is a better approach productivity-, correctness-, efficiency-, and skill-wise.
I can only think of LLMs as fast typists with some domain knowledge.
Like typists of government/legal documents who know how to format documents but cannot practice law. Likewise, LLMs are code typists who can write good/decent/bad code but cannot practice software engineering - we need, and will need, a human for that.
That's exactly the thing. Claude Code with Opus 4.5 is already significantly better at essentially everything than a large percentage of devs I've had the displeasure of working with, including learning when asked to retain a memory. It's still very far from the best devs, but this is the worst it'll ever be, and it has already significantly raised the bar for hiring.
And even if the models themselves for some reason were to never get better than what we have now, we've only scratched the surface of harnesses to make them better.
We know a lot about how to make groups of people achieve things individual members never could, and most of the same techniques work for LLMs, but it takes extra work to figure out how to most efficiently work around limitations such as the lack of integrated long-term memory.
A lot of that work is in its infancy. E.g. I have a project I'm working on now where I'm up to a couple dozen agents, and every day I'm learning more about how to structure them to squeeze the most out of the models.
One learning that feels relevant to the linked article: instead of giving an agent the whole task across a large dataset that would overwhelm its context, it often helps to have a separate agent - which can use Haiku, because it's fine if it's dumb - comb the data for <information relevant to the specific task> and generate a list of findings, then have the bigger model use that list as a guide.
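The pattern described here - a cheap first pass that extracts a small, relevant slice of the data before the expensive model ever sees it - can be sketched generically. Everything below is hypothetical: `cheapFilter` stands in for the Haiku agent and `expensiveAnswer` for the larger model; neither calls a real API.

```java
import java.util.List;
import java.util.stream.Collectors;

public class TwoPassPipeline {
    // Stand-in for the cheap model: keep only records mentioning the topic.
    // (A real Haiku agent would do fuzzier, smarter filtering.)
    static List<String> cheapFilter(List<String> records, String topic) {
        return records.stream()
                .filter(r -> r.contains(topic))
                .collect(Collectors.toList());
    }

    // Stand-in for the big model: it only ever sees the filtered slice,
    // so the full dataset never has to fit in its context window.
    static String expensiveAnswer(List<String> relevant) {
        return "summary of " + relevant.size() + " relevant records";
    }

    public static void main(String[] args) {
        List<String> dataset = List.of(
                "ticket 1: login page crashes",
                "ticket 2: dark mode request",
                "ticket 3: login timeout after 30s");

        List<String> relevant = cheapFilter(dataset, "login");
        System.out.println(expensiveAnswer(relevant));
    }
}
```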
So the progress we're seeing is not just raw model improvements but also work like that in this article: figuring out how to squeeze the best results out of any given model. That work would continue to yield improvements for years even if models somehow stopped improving.
Steve Jobs
Now, who the doers are in the age of LLMs is another question.