
No. What does this even mean? How would you make this actionable? LLMs are not "fact retrieval machines", and OpenAI is not presenting ChatGPT as a legal case database. In fact, they already have many disclaimers stating that GPT may provide incorrect information. If humans in their infinite stupidity choose to disregard these warnings, that's on them.

Regulation is not the answer.



LLMs are not fact retrieval machines, you say? But https://openai.com/product claims:

"GPT-4 can follow complex instructions in natural language and solve difficult problems with accuracy."

"use cases like long form content creation, extended conversations, and document search and analysis."

and that's why we need regulations. In the US, one needs FDA approval before claiming that a drug can treat some disease; the food preparation industry is regulated, vehicles are regulated, and so on. Given how existing LLMs are marketed, they should carry the same kind of warning, probably similar to the one on dietary supplements:

"This statement has not been evaluated by the AI Administration. This product is designed to generate plausible-looking text, and is not intended to provide accurate information"


GPT-4 can be used as an engine for document search and analysis if you connect it to a database of documents and prompt it to search and analyze them, roughly as in the sketch below.

The OpenAI chat frontend, for legal research, is not that.
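
To make the distinction concrete, here is a minimal retrieval-augmented sketch of what "document search and analysis" with GPT-4 could look like: the model only answers from passages you retrieve and hand it, instead of from whatever its weights happen to contain. This is an illustration, not OpenAI's product; the model names, the toy corpus, and the prompt wording are all assumptions, and it presumes the openai Python package (v1+) with an OPENAI_API_KEY set.

  import numpy as np
  from openai import OpenAI

  client = OpenAI()

  # Toy "database" of documents. A real system would chunk and index
  # thousands of filings; three invented snippets stand in for that here.
  documents = [
      "Smith v. Jones (2019): the court held that emailed acceptance forms a contract.",
      "Doe v. Acme Corp (2021): summary judgment granted; no duty of care was owed.",
      "Roe v. Widget LLC (2020): punitive damages capped at three times compensatory damages.",
  ]

  def embed(texts):
      """Return one embedding vector per input string."""
      resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
      return np.array([d.embedding for d in resp.data])

  doc_vectors = embed(documents)

  def search(query, k=2):
      """Rank documents by cosine similarity to the query; return the top k."""
      q = embed([query])[0]
      scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
      return [documents[i] for i in np.argsort(scores)[::-1][:k]]

  def answer(query):
      """Ask GPT-4 to answer strictly from the retrieved passages."""
      context = "\n".join(search(query))
      messages = [
          {"role": "system", "content": (
              "Answer only from the provided documents. "
              "If the answer is not in them, say you do not know. "
              "Cite the case you rely on.")},
          {"role": "user", "content": f"Documents:\n{context}\n\nQuestion: {query}"},
      ]
      resp = client.chat.completions.create(model="gpt-4", messages=messages)
      return resp.choices[0].message.content

  print(answer("Is there precedent on contracts formed by email?"))

The point is that the grounding lives in the retrieval step and the instruction to answer only from the supplied documents; a bare chat session with no document store has neither, which is exactly why citing it as a case database goes wrong.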




