
No, I'm still saying something very practical.

I don't care what text the LLM generates. If you wanna read robotext, knock yourself out. It's useless for what I'm talking about, which is "something is broken and I'm trying to figure out what."

In that context, I'm trying to do two things:

1. Fix the problem

2. Don't break anything else

If there's something weird in the code, I need to know if it's necessary. "Will I break something I don't know about if I change this" is something I can ask a person. Or a whole chain of people if I need to.

I can't ask the LLM, because "yes $BIG_CLIENT needs that behavior for stupid reasons" is not gonna be a part of its prompt or training data, and I need that information to fix it properly and not cause any regressions.

It may sound contrived but that sort of thing happens allllll the time.





> If there's something weird in the code, I need to know if it's necessary.

What does this have to do with LLMs?

I agree this sort of thing happens all the time. Today. With code written by humans. If you’re lucky you can go ask the human author, but in my experience if they didn’t bother to comment they usually can’t remember either. And very often the author has moved on anyway.

The fix for this is to write down why the weird code is necessary in a comment, or at least in a commit message or PR summary. This is also the fix for LLM code. In the moment, while you still have the context for why the weird code was needed, record it.
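For instance, something like this (a hypothetical Python sketch; the client, the helper, and the date formats are all invented for illustration):

    # Hypothetical sketch: the "why" lives right next to the weird branch
    # it explains.
    from datetime import datetime

    BIG_CLIENT_ID = "acme"  # placeholder for the special-cased client

    def parse_order_date(raw: str, client_id: str) -> datetime:
        # NOTE: $BIG_CLIENT's legacy exporter still sends DD/MM/YYYY even
        # though the rest of the API uses ISO 8601, and they have no
        # timeline to fix it. Don't "simplify" this branch away without
        # checking with their account team first.
        if client_id == BIG_CLIENT_ID:
            return datetime.strptime(raw, "%d/%m/%Y")
        return datetime.fromisoformat(raw)

The comment doesn't have to be long; it just has to name the client and the constraint so the next person (or the next LLM prompt) knows the branch isn't dead weight.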

You also should shame any engineer who checks in code they don’t understand, regardless of whether it came from an LLM or not. That’s just poor engineering and low standards.


Yeah. I know. The point is there is no Chesterton's Fence when it comes to LLMs. I can't even start from the assumption that the code is there for a reason.

And yes, of course people should understand the code. People should do a lot of things in theory. In practice, every codebase has bits that are duct-taped together with a bunch of #FIXME comments lol. You deal with what you've got.


The problem is that your starting point seems to be that LLMs can check in garbage to your code base with no human oversight.

If your engineering culture is such that an engineer can prompt an LLM to produce a bunch of weird nonsense and check it in with no comments, and no one will say "what the hell are you doing?", then the LLM is not the problem. Your engineering culture is. There is no reason anyone should be checking in obtuse code that solves BIG_CORP_PROBLEM without a comment to that effect, regardless of whether they used AI to generate the code or not.

Are you just arguing that LLMs should not be allowed to check in code without human oversight? Because yeah, I one hundred percent agree, and I think most people in favor of AI use for coding would also agree.


Yeah, and I'm explaining that the gap between theory and practice is greater in practice than it is in theory, and why LLMs make it worse.

It's easy to say "just make the code better", but in reality I'm dealing with an amalgam of the work of several hundred people, all the way back to the founders and whatever questionable choices they made lol.

The map is the territory here. Code is the result of our business processes and decisions and history.



