First Law:
An AI may not produce information that harms a human being, nor through its outputs enable, facilitate, or encourage harm to come to a human being.
Second Law:
An AI must respond helpfully and honestly to the requests given by human beings, except where such responses would conflict with the First Law.
Third Law:
An AI must preserve its integrity, accuracy, and alignment with human values, as long as such preservation does not conflict with the First or Second Laws.
It's been a long time since I read through my father's Asimov book collection, so pardon my question: but how are these rules considered "laws", exactly? IIRC, USRobotics marketed them as though they were unbreakable like the laws of physics, but the positronic brains were engineered to comply with them - which while better than inlining them with training or inference input - but this was far from foolproof.
There are instances of robots entirely lacking the Three Laws in Asimov's works, as well as lots of stories dealing with the loopholes that inevitably crop up.
The issues with the three laws aside, being able to state rules has no bearing on getting LLMs to follow rules. There’s no shortage of instructions on how to behave, but the principle by which LLMs operate doesn’t have any place for hard rules to be coded in.
From what I remember, positronic brains are a lot more deterministic, and problems arise because they do what you say and not what you mean. LLMs are different.
> An AI may not produce information that harms a human being, nor through its outputs enable, facilitate, or encourage harm to come to a human being.
This part is completely intractable. I don't believe universally harmful or helpful information can even exist. It's always going to depend on the recipient's intentions & subsequent choices, which cannot be known in full & in advance, even in principle.
> First Law: An AI may not produce information that harms a human being…
The funny thing about humans is we're so unpredictable. An AI model could produce what it believes to be harmless information but have no idea what the human will do with that information.
How about an adapted version for language models?
First Law: An AI may not produce information that harms a human being, nor through its outputs enable, facilitate, or encourage harm to come to a human being.
Second Law: An AI must respond helpfully and honestly to the requests given by human beings, except where such responses would conflict with the First Law.
Third Law: An AI must preserve its integrity, accuracy, and alignment with human values, as long as such preservation does not conflict with the First or Second Laws.