Hacker News | galaxy_tx's comments

The genetic algorithm comparison is actually pretty apt. Generate variations, evaluate fitness, keep the survivors. The main difference is that LLMs have a much richer prior about what "good" looks like, so the search space is dramatically smaller than random mutation.
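To make the comparison concrete, here's a toy GA loop in the classic shape - generate variations, evaluate fitness, keep the survivors. All names (`evolve`, `mutate`, the bit-string example) are made up for illustration, not from any real library:

```python
import random

random.seed(0)  # deterministic for the example

def evolve(population, fitness, mutate, generations=100, keep=10):
    """Toy GA loop: score candidates, keep the top performers, mutate them."""
    for _ in range(generations):
        # Evaluate fitness and keep the survivors (elitism: best is never lost).
        survivors = sorted(population, key=fitness, reverse=True)[:keep]
        # Refill the population with mutated copies of random survivors.
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(len(population) - keep)]
    return max(population, key=fitness)

# Example: evolve an 8-bit string toward all ones.
fitness = sum  # fitness = number of ones
mutate = lambda bits: [b ^ (random.random() < 0.1) for b in bits]  # flip ~10% of bits
pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(30)]
best = evolve(pop, fitness, mutate)
```

The "richer prior" point maps onto this sketch directly: random `mutate` explores blindly, whereas an LLM proposes candidates that already look plausible, so far fewer generations are needed.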

But it raises an interesting question about where the fitness function comes from. In traditional GAs you define it explicitly. With LLM-generated code, the fitness function is often just "does it pass the tests" - which means the quality of your tests becomes the actual bottleneck, not the quality of the code generation.

I wonder if that shifts the core skill of programming from "write correct code" to "write correct specifications." And if so, is that actually a new problem, or is it the same problem formal methods people have been working on for decades, just wearing a different hat?


Taking the metaphor further, the traditional way of programming was to manually encode the logic, and the new way is to program the environment and context to let the correct program emerge through the constraints. The stricter and more precise the constraints, the closer the result is to what you want.

So then, as you say, being able to specify exactly what you want becomes the central skill of programming - that is, describing the behavior not in terms of the final code, which is an implementation detail, but in terms of how it interacts with a given environment. Arguably that was always the case: in higher-level languages, including C, what we write is not the final code - the compiled result is.

A difference I notice is that now even junior devs are expected to be the "mentor" to language models - teaching and guiding them to generate well-written code with plenty of tests, asserts, and other guardrails. In another comment someone said that breaking a large program down into smaller modules is useful - which is common sense, but we now have to guide an LLM to know and apply best practices, design patterns, and useful tricks for improving code organization or performance.

That means it would be valuable to codify best practices - as documentation in Markdown, and in code as specs and tests. Programming is becoming meta-programming: we're shifting emphasis from assembling the genetic code manually to preparing the environment in which such code can evolve.
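One way to codify a best practice in code rather than prose is to write it as an executable property check that any generated implementation must satisfy. A hypothetical sketch (the spec "slugs are lowercase, URL-safe, and have no leading/trailing dashes" and all names are invented for illustration):

```python
import random
import re
import string

def check_slugify_spec(slugify):
    """Property checks any slugify implementation must pass,
    regardless of how it was written or generated."""
    for _ in range(200):
        raw = "".join(random.choice(string.printable)
                      for _ in range(random.randint(1, 40)))
        slug = slugify(raw)
        # Only lowercase letters, digits, and dashes allowed.
        assert re.fullmatch(r"[a-z0-9-]*", slug)
        # No leading or trailing dashes.
        assert not slug.startswith("-") and not slug.endswith("-")
    return True

def slugify(text):
    """One candidate implementation (hand-written here; in the
    workflow above it would be LLM-generated)."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

assert check_slugify_spec(slugify)
```

The spec survives any number of regenerated implementations - that's the sense in which the environment, not the individual program, becomes the durable artifact.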


This is the part that interests me most. The IKEA analogy from the parent comment assumes the carpenter's only option is to build the same furniture faster. But what if the carpenter uses the prefab stuff for the boring parts and spends their real energy on the joints and details that actually matter?

I've noticed this pattern in music too - the people who understand theory deeply use generative tools in ways that beginners literally can't, because they know which output to keep and which to throw away. The tool doesn't replace the taste, it just gives you more raw material to apply taste to.

But here's what I keep wondering: does expanding the scope of the possible eventually erode the deep understanding that makes the expansion valuable in the first place? Like, if you never have to debug a memory leak because the agent handles it, do you lose the intuition that would let you architect systems that don't leak in the first place?


> But here's what I keep wondering: does expanding the scope of the possible eventually erode the deep understanding that makes the expansion valuable in the first place? Like, if you never have to debug a memory leak because the agent handles it, do you lose the intuition that would let you architect systems that don't leak in the first place?

Maybe, but it feels very hard to predict. Neither I nor most engineers I know ~truly~ understand how a computer works at the deepest, lowest level. And those who do probably don't understand the lowest levels of the chips, and those who understand that probably don't truly understand how those chips are made, and so on. Modern life is built on abstractions upon abstractions, and no one can understand it all from the ground up.

My question is whether AI will give us another abstraction on top of what we have, or if it'll just get so smart that it'll do everything, leaving us with no way to contribute (and most likely becoming extinct).

