Hacker News | krater23's comments

I think the right way would be to sell this shit on the darknet and then anonymously reveal the bug to the public.

Nope, that just doesn't work either.

That's an assumption - maybe backed by experience, but still an assumption. The professional way would be to escalate slowly: tell them nicely and in a friendly way, wait a bit, then increase the pressure bit by bit.

You also don't directly shout at anyone making a mistake - at least not the first time.


Fuck you! Name the company! They shall burn!

The difference is that the web had no borders; AI has strong borders around what it does and what it doesn't do.

I can download uncensored models pretty easily. There’s even uncensored frontier models. My machine isn’t big enough to run those but you can rent power to run them pretty cheap if you want.

Nope. You're just missing the millions of SEO websites that used to be easy to spot and ignore. Now there are millions of AI-generated SEO websites that are difficult to spot and contain only slop that doesn't help you find the information you're searching for.

You can, but by the time you've gone through the effort of getting the AI to generate good code, you could just have written it yourself. So only two kinds of code come out of AI tools: boilerplate code and shitty code.

Exactly. There's no benefit to using LLMs as they exist today, because it winds up being the same amount of work (if not more!) to ensure that they are giving you code which actually works. That isn't a useful tool.

And are you sure that you fixed it without creating 20 new bugs? To the reader, this could mean that you never understood the bug in the first place, so how can you be sure that you've done anything right?

How do you make sure you don't create bugs in the code you write without an LLM? I imagine for most people, the answer is a combination of self-review and testing. You can just do those same things with code an LLM helps you write and at that point you have the same level of confidence.
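The "same things" amount to review plus tests, regardless of who wrote the code. A minimal sketch of that workflow; the helper function here is hypothetical, invented purely to stand in for LLM output, not taken from anyone's comment:

```python
# Hypothetical LLM-generated helper: parse "KEY=VALUE" lines into a dict.
def parse_pairs(text):
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank and malformed lines
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

# The same checks you'd apply to your own code: exercise the normal
# path and the edge cases before trusting the output.
assert parse_pairs("a=1\nb = 2") == {"a": "1", "b": "2"}
assert parse_pairs("") == {}
assert parse_pairs("garbage line") == {}
```

If the assertions pass after you've actually read the implementation, your confidence is the same as for hand-written code; the source of the bytes stops mattering.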

It’s much harder to understand code you didn’t write than code you wrote.

Yes, that's the fundamental tradeoff. But if the amount of time you save writing the code is higher than the amount of extra time you need to spend reading it, the tradeoff is worth it. That's going to vary from person to person for a given task though, and as long as the developer is actually spending the extra time reading and understanding the code, I don't think the approach matters as much as the result.

I'm pretty sure I did not create bugs, because I validated it thoroughly: I had to deploy it into production in a fintech environment.

So I am quite confident in, as well as convinced by, the change. But then, I know what I know.


This is the fundamental problem. You know what you know, but the maintainer does not, and cannot possibly take the time to find out what every single PR author knows before accepting it. AI breaks every part of the web of trust that is foundational to knowing anything.

Using an LLM as an assistant isn’t necessarily equivalent to not understanding the output. A common use case of LLMs is to quickly search codebases and pinpoint problems.

Code complexity is often the cause of more bugs, and complexity naturally comes from more code. It is not uncommon. As they say, the best code I ever wrote was no code.

If the test coverage is good it will most likely be fine.

An article this long just to blame Facebook for not giving away private data to a three-letter organization.

That's exactly what the term vibe coding describes.

These kinds of surprises are the reason we should switch off auto-update on every piece of software.

