> The vagueness comes from who the "developer" is when the LLM goes awry. Is it OpenAI's fault if a third-party app has a slip-up, or is it the third party's? If a research lab puts out a new LLM that another company decides to put in their airplane, and that airplane crashes, can the original lab be liable, or only if they claim it to be an OSS airplane LLM?
Doesn't seem that vague to me. The law says:
> (b) In an action against a defendant that developed or used artificial intelligence
IANAL, but the law doesn't say who is liable; it says who cannot use this as a defense to escape damages in a civil suit. From my read, neither OpenAI nor the third party could use it, and either one could be found liable depending on whom a lawsuit targets.