All this seems to do is say you can't use "the AI model did that, not me" as a defense to escape damages in a civil suit; it doesn't change the extent to which someone could be liable for encouraging suicide.
The AI is employing persuasive skills learned directly from some fucko suicide cult leaders to purposely talk you into and through doing it. That doesn't seem NEARLY the same in a practical or legal sense.
I suppose Jack in the Box shouldn't be liable for an E. coli outbreak, then? Not sure why AI companies (or third-party developers who aren't especially careful in how they use these models) deserve a special exception for selling sausage made from unsanitary sources.