> Today it's "catastrophic risks." Tomorrow it's misinformation, hate speech, or whatever else the state decides needs "safety mitigations."
That's the problem.
I'm less worried about catastrophic risks than routine ones. If you want to find out how to do something illegal or dangerous, all an LLM can give you is a digest of what's already available online. Probably with errors.
The US has lots of hate speech, and it's mostly background noise, not a new problem.
"Misinformation" is more of a problem, because the big public LLMs digest the Internet and add authority with their picks. It's adding the authority of Google or Microsoft to bogus info that's a problem. This is a basic task of real journalism - when do you say "X happened", and when do you say "Y says X happened"? LLMs should probably be instructed to err in the direction of "Y says X happened".
"Safety" usually means "less sex". Which, in the age of Pornhub, seems a non-issue, although worrying about it occupies the time of too many people.
An issue that's not being addressed at all here is using AI systems to manipulate customers and provide evasive customer service. That's a commercial speech and consumer rights issue, not a First Amendment one, and it should be addressed through consumer protection law.
Then there's the issue of an AI as your boss. Like Uber.
Presumably things like making sure LLMs don't encourage self-harm or fuel delusions also fall under "safety", but probably also "ethics".