In this world where the LLM implementation has a bug that impacts a human negatively (the app could be calculating a person's credit score, for example)

Who is accountable?



I couldn't even tell you who is liable right now for bugs that impact humans negatively. Can you? If I were an IC at an airplane manufacturer and a bug I wrote caused an airplane crash - who is legally responsible? Is it me? The QA team? The management team? Some 3rd party auditor? Some insurance underwriter? I have a strong suspicion it is very complicated as it is, without even considering LLMs.

What I can tell you is that, the last time I checked, laws are written in natural language, and they are argued for/against and interpreted in natural language. I'm pretty confident there is applicable precedent and that the court system is already well equipped to deal with autonomous systems.


> If I was an IC at an airplane manufacturer and a bug I wrote caused an airplane crash - who is legally responsible?

I am not sure it is that complicated, from a legal perspective. It is the company hiring you that would be legally responsible. If you are an external consultant, things may get more complicated, but I am pretty sure that for mission-critical software, companies wouldn't use external consultants (for this particular reason, but also many others).



