
Smells exactly like an LLM-created solution.


Or just what happens when you hire a bunch of 20-year-olds and let them loose.

That's currently how I model my usage of LLMs in code. A smart veeeery junior engineer that needs to be kept on a veeeeery short leash.


Yes. LLMs are very much like a smart intern you hired with no real experience who is very eager to please you.


IMO, they're worse than that. You can teach an intern things, correct their mistakes, help them become better and your investment will lead to them performing better.

LLMs are an eternal intern that can only repeat what it's gleaned from some articles it skimmed last year or whatever. If your expected response isn't in its corpus, or isn't in it frequently enough, and it can't just regurgitate an amalgamation of the top N articles you'd find on Google anyway, tough luck.


The Age of the Eternal Intern


LLMs are to interns what house cats are to babies. They seem more self-sufficient at first, but eventually the toddler grows up, while you're stuck with an animal that will forever need you to scoop its poops.


And the content online is now written by Fully Automated Eternal September


Today is Friday the 11490th of September 1993.


Without a mechanism to detect output from LLMs, we’re essentially facing an eternal model collapse with each new ingestion of information from academic journals, to blogs, to art. [1][2]

[1] https://en.m.wikipedia.org/wiki/Model_collapse

[2] https://thebullshitmachines.com/lesson-16-the-first-step-fal...


> You can teach an intern things, correct their mistakes, help them become better and your investment will lead to them performing better.

You can't do it the same way you would with a human developer, but you can get a somewhat effective form of it through things like .cursorrules files and the like.
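For anyone unfamiliar: a .cursorrules file is just plain-text instructions at the project root that the editor prepends to every prompt. A minimal sketch, with contents invented purely for illustration:

```text
# .cursorrules (project root) -- example contents
- Always use parameterized queries; never interpolate user input into SQL.
- Every server action must verify the caller's session before mutating data.
- Prefer the project's existing logging helper over console.log.
```

It's closer to a standing memo than to teaching: the model re-reads the rules every time rather than internalizing them.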


Even at 20 years old I would not have done this.


The difference is that today's digital natives regard computers as magic and most don't know what's really happening when their framework du jour spits out some "unreadable" text.


So much this. I was interning at a government entity at 20, and even then I knew you needed credentials to do shit. Most frameworks give you this by default, for free. We're so incredibly screwed with these folks running rampant and destroying the government.


One who thinks "open source" means blindly copy/pasting code snippets found online.


It's definitely both. A bunch of 20-year-olds were let loose to be "super efficient." So, to be efficient, they used LLMs to implement what should be a major government oversight webpage. Even after the fix, the list is a few half-baked partial document excerpts with a few sentences saying, "look how great we are!" It's embarrassing.


Does it? At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

Maybe they used Grok ;P


> At least my experience is that ChatGPT goes super hard on security, heavily promoting the use of best practices.

Not my experience at all. Every LLM produces lots of trivial SQLi/XSS/other-injection vulnerabilities. Worse, they seem to completely skip authorization, business logic, error handling, and logging even when prompted to include them.
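For concreteness, the classic SQLi shape is string-built queries; parameterized queries keep attacker input out of the SQL text entirely. A minimal sketch (table and column names invented; `?` placeholder syntax as in most drivers):

```typescript
// Vulnerable: user input is spliced directly into the SQL text,
// so input containing quotes rewrites the query itself.
function findUserUnsafe(name: string): string {
  return `SELECT * FROM users WHERE name = '${name}'`;
}

// Safer: the SQL text is fixed; the driver sends values separately,
// so they can never be parsed as SQL.
function findUserSafe(name: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE name = ?", params: [name] };
}

const evil = "' OR '1'='1";
console.log(findUserUnsafe(evil));
// SELECT * FROM users WHERE name = '' OR '1'='1'  <- condition is now always true
console.log(findUserSafe(evil).sql);
// SELECT * FROM users WHERE name = ?              <- input never touches the SQL
```

An LLM will happily emit the first form because both appear constantly in its training data.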


Does it, though? The saying goes that we shouldn't attribute to malice what incompetence can explain, but that requires more benefit of the doubt than usual for Musk's retinue.

Smells like getting a backdoor in early.


Apparently they get backdoors in as incompetently as they create efficiency.


My first guess is that this is an unauthenticated server action.[0]

0 - https://blog.arcjet.com/next-js-server-action-security/
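The bug class the linked post describes: a Next.js server action is a public HTTP endpoint, so it must do its own authorization check, because nothing stops a client from invoking it directly. A simplified sketch of the pattern (all names are hypothetical, the session lookup is a stub standing in for a real signed-cookie check, and real server actions are async):

```typescript
type Session = { userId: string; isAdmin: boolean } | null;

// Stand-in for reading and verifying a signed session cookie.
function getSession(token: string | null): Session {
  return token === "valid-admin-token" ? { userId: "1", isAdmin: true } : null;
}

// The "server action": refuse before mutating anything. Omitting the
// session check is exactly the guess being made upthread.
function deleteRecord(token: string | null, id: string): string {
  const session = getSession(token);
  if (!session?.isAdmin) {
    throw new Error("unauthorized");
  }
  return `deleted ${id}`;
}

console.log(deleteRecord("valid-admin-token", "42")); // deleted 42
```

The footgun is that the action *looks* like an internal function call from the component that imports it, so it's easy to forget it is reachable by anyone on the internet.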


Maybe DOGE should have used an LLM to generate defenses.


They did, and this is what they got.



