Hacker News | gjadi's comments

“The doers are the major thinkers. The people that really create the things that change this industry are both the thinker and doer in one person.”

Steve Jobs

Now, what doers are in the age of LLMs is another question.


Well, was Jobs a "doer"? Did he get his hands dirty in the code? Or did he use his employees the way we would like to use LLMs?

> Well was Jobs a "doer"?

Jobs' talent was that he was an incredibly talented salesman.


Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent.

> Salespeople sell things that already exist. If you can envision new things that would sell well, that's a bit more than sales talent

A lot of gadgets that Steve Jobs claimed were envisioned by Apple (or rather: by him) - as I wrote, Steve Jobs was an exceptional salesman - already existed before, just with a few more rough edges. Those earlier versions did not sell so well, because their makers did not have a marketing department that made people believe what they were selling was the next big thing.


Have you ever heard of Steve Jobs?

That wasn't too hard for him given he was also an incredibly talented market opportunity spotter and product leader.

Why do people write such nonsense?

Jobs envisioned the iPad and iPhone. Did he do the physical work? No. But he created direction.

Everyone around him at that time has commented on this. Are you going to claim they’re all lying?


> Jobs envisioned the iPad and iPhone. [...] Everyone around him at that time has commented on this. Are you going to claim they’re all lying?

I don't claim that they are all lying, but I do claim that quite some people fell for Apple's marketing (as I wrote: "Jobs' talent was that he was an incredibly talented salesman.").


Because people only quote it partially.

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%.


The maintainer explained the reasoning for closing the issue quite well in a comment.


Not everyone is motivated by the highest wage they can get.

Good enough can be good enough; then you can aim for fun, interesting, challenging, fulfilling work instead of a fatter check.


>Not everyone is motivated by the highest wage they can get.

This idealism tends to go away once you have to buy a home and realize you're working more hours for less money than your mates in other industries that are easier to get into; then you switch really quickly.

People aren't selfless when it comes to being exploited by private sector entities, they'll always go towards the ones with the best wage/hour ratios.

People aren't stupid. Why would they voluntarily choose to work harder and be less well off? It's not like this is working for the public good, like medicine, firefighting, EMT, or education.


There is always more money elsewhere.

But once you have a home, enough to raise your family and save for later, when is enough enough?

And is the work fun? Fulfilling?

Money is a means to an end.

Sure, you can aim to earn enough to reach FIRE ASAP. In my case, I aim for FIRE over the next 40 years while maxing my fun in the meantime :)


IMHO, the real problem is that they create an even greater dissonance between online life and IRL.

Think about dating apps, pictures could be fake, and now words exchanged can be fake too.

You thought you were arguing with a gentle and smart colleague over chat and email; too bad, when you meet them at a conference or at a restaurant you find them very unpleasant.


This.

For me, navigating with shortcuts feels like I can keep my inner monologue going; it is part of it, maybe because I can spell it out?

Dunno, but reaching for the mouse and navigating around breaks that, even if it can be more convenient for some actions.


Interesting argument.

But isn't it the correction of those errors that is valuable to society and gets us a job?

People can tell you they found a bug or describe what they want from software, yet it requires skill to fix bugs and build software. Though LLMs can speed up the process, expert human judgment is still required.


I think there are different levels to look at this.

If you know that you need O(n) "contains" checks and O(1) retrieval for items, for a given order of magnitude, it feels like you've all the pieces of the puzzle needed to make sure you keep the LLM on the straight and narrow, even if you didn't know off the top of your head that you should choose ArrayList.
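To make that concrete, here is a minimal Java sketch (the list contents are made up for illustration): ArrayList retrieves by index in O(1) from its backing array, while contains is an O(n) scan, which matches the requirements described above.

```java
import java.util.ArrayList;
import java.util.List;

public class ListChoice {
    public static void main(String[] args) {
        List<String> items = new ArrayList<>();
        for (int i = 0; i < 1_000; i++) {
            items.add("item-" + i);
        }
        // O(1): ArrayList reads by index directly from its backing array.
        String fifth = items.get(4);
        // O(n): contains scans the list from the front.
        boolean found = items.contains("item-999");
        System.out.println(fifth + " " + found);
    }
}
```

A LinkedList would flip the trade-off: get(i) becomes O(n) while contains stays O(n), so knowing the required complexities is enough to steer the choice.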

Or if you know that string manipulation might be memory intensive so you write automated tests around it for your order of magnitude, it probably doesn't really matter if you didn't know to choose StringBuilder.
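As a sketch of why that matters (illustrative only): `+=` on a String in a loop copies all prior characters each iteration, so the total work is quadratic, while StringBuilder appends into a growing buffer in amortized constant time.

```java
public class Concat {
    // Quadratic overall: each += allocates a new String and copies everything so far.
    static String slowJoin(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += "x";
        }
        return s;
    }

    // Amortized linear: StringBuilder appends in place, growing its buffer as needed.
    static String fastJoin(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append("x");
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(slowJoin(5).equals(fastJoin(5)));
        System.out.println(fastJoin(10_000).length());
    }
}
```

An automated test around memory or runtime at your order of magnitude would catch the quadratic version even if you never knew StringBuilder existed.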

That feels different to e.g. not knowing the difference between an array list and linked list (or the concept of time/space complexity) in the first place.


My gut feeling is that, without wrestling with data structures at least once (e.g. during a course), that knowledge about complexity will be cargo cult.

When it comes to fundamentals, I think it's still worth the investment.

To paraphrase, "months of prompting can save weeks of learning".


I think the kind of judgement required here is to design ways to test the code without inspecting it manually line by line; that would be walking a motorcycle, and you would only be vibe-testing. That is why we have seen the FastRender browser and the JustHTML parser - the testing part was solved upfront, so the AI could go nuts implementing.


I partially agree, but I don’t think “design ways to test the code without inspecting it manually line by line” is a good strategy.

Tests only cover cases you already know to look for. In my experience, many important edge cases are discovered by reading the implementation and noticing hidden assumptions or unintended interactions.

When something goes wrong, understanding why almost always requires looking at the code, and that understanding is what informs better tests.


Another possibility is to implement the same spec twice, and do differential testing, you can catch diverging assumptions and clarify them.
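A minimal sketch of that idea (the spec here - "sum the even values" - and both implementations are hypothetical): two independently written versions are run against each other on random inputs, and any divergence flags a hidden assumption in one of them.

```java
import java.util.Random;

public class DiffTest {
    // Implementation A of the spec "sum of the even values".
    static long sumEvensA(int[] xs) {
        long total = 0;
        for (int x : xs) {
            if (x % 2 == 0) total += x;
        }
        return total;
    }

    // Implementation B of the same spec, written independently.
    static long sumEvensB(int[] xs) {
        long total = 0;
        for (int x : xs) {
            total += (x % 2 == 0) ? x : 0;
        }
        return total;
    }

    public static void main(String[] args) {
        Random rng = new Random(42); // fixed seed so runs are reproducible
        for (int trial = 0; trial < 1_000; trial++) {
            int[] xs = rng.ints(20, -100, 100).toArray();
            long a = sumEvensA(xs);
            long b = sumEvensB(xs);
            if (a != b) {
                // A divergence pinpoints a differing assumption between the two.
                throw new AssertionError("diverged on trial " + trial + ": " + a + " vs " + b);
            }
        }
        System.out.println("1000 trials, no divergence");
    }
}
```

With LLM-generated code, the two versions could come from separate prompts or separate models, so their mistakes are less likely to be correlated.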


Isn't that too much work?

Instead, just learning concepts with AI and then using HI (Human Intelligence) and AI to solve the problem at hand - by going through code line by line and writing tests - is a better approach productivity-, correctness-, efficiency-, and skill-wise.

I can only think of LLMs as fast typists with some domain knowledge.

Like typists of government or legal documents who know how to format documents but cannot practice law, LLMs are code typists who can write good, decent, or bad code but cannot practice software engineering - we need, and will need, a human for that.


Its ego won't get in the way, but its lack of intelligence will.

Whereas a junior might be reluctant at first, but if they are smart they will learn and get better.

So maybe LLMs are better than not-so-smart people, but you usually try to avoid hiring those people in the first place.


That's exactly the thing. Claude Code with Opus 4.5 is already significantly better at essentially everything than a large percentage of devs I've had the displeasure of working with, including learning when asked to retain a memory. It's still very far from the best devs, but this is the worst it'll ever be, and it has already significantly raised the bar for hiring.


> but this is the worst it'll ever be

And even if the models themselves for some reason were to never get better than what we have now, we've only scratched the surface of harnesses to make them better.

We know a lot about how to make groups of people achieve things individual members never could, and most of the same techniques work for LLMs, but it takes extra work to figure out how to most efficiently work around limitations such as the lack of integrated long-term memory.

A lot of that work is in its infancy. E.g. I have a project I'm working on now where I'm up to a couple dozen agents, and every day I'm learning more about how to structure them to squeeze the most out of the models.

One learning that feels relevant to the linked article: instead of giving an agent the whole task across a large dataset that would overwhelm its context, it often helps to have an agent - which can use Haiku, because it's fine if it's dumb - comb the data for <information relevant to the specific task> and generate a list, and have the bigger model use that as a guide.

So the progress we're seeing is not just raw model improvements, but work like the one in this article: Figuring out how to squeeze the best results out of any given model, and that work would continue to yield improvements for years even if models somehow stopped improving.


It's like pot.

Back in the day, it was much less concentrated and less dangerous than what you can get today.


It's much easier to forbid something to a subset of the population than to the population at large.

