
ChatGPT did exactly what it is supposed to do. The lawyers who cited its output are fools in my opinion. Of course, OpenAI is also an irresponsible company for enabling such a powerful technology without adequate warnings. With each ChatGPT response it should provide citations (like Google does) and a clearly visible disclaimer that what it just spewed may be utter BS.

I only hope the judge issues an order requiring all AI companies to include the above-mentioned disclaimer with each of their responses.



The remedy here seems to be expecting lawyers to do their jobs. Citations would be nice, but I don't see a reason to legislate that requirement, especially from the bench. Let the market sort this one out, and discipline the lawyers using existing mechanisms.


From the NYT article on it: https://www.nytimes.com/2023/05/27/nyregion/avianca-airline-...

> Judge Castel said in an order that he had been presented with “an unprecedented circumstance,” a legal submission replete with “bogus judicial decisions, with bogus quotes and bogus internal citations.” He ordered a hearing for June 8 to discuss potential sanctions.


There's no adequate warning possible for the current state of the technology. OpenAI could put a visible disclaimer after every single answer, and the vast majority would assume it was a CYA warning for purely legal purposes.


I have to click through a warning on ChatGPT on every session, and every new chat comes primed with a large set of warnings about how it might make things up and please verify everything.

It's not that there aren't enough disclaimers. It turns out that plastering warnings and disclaimers everywhere just doesn't make people act smarter.



