I think it's just saying that AIs are treated like inanimate objects, and thus not something liability can attach to. Here's an analogy that I think illustrates the effect of the law, if I've understood it: say I drive my car into a house and damage it, and the owner of the house sues me. Now, it's not a given that I'm personally liable for the damages, since a car can malfunction and go out of control through no fault of the driver. But if I walk into court and argue that the car itself should be held liable and responsible for the damages, I'm probably going to have a bad day. Similarly, I shouldn't be able to claim that an AI is responsible for some harm, because you can't frickin' sue an AI, can you?
The article goes on to ask who is liable then: the developer of the AI, the user, or someone in between? It's a reasonable question, but it really isn't about this law at all. It isn't even a question about AI, since you can replace the AI with any software developed by a third party. In fact, it isn't about software either, since you can replace the software with any third-party component, even a physical one. So I would expect that whatever legal doctrines exist for assigning liability in those situations would also apply to AI models incorporated into other systems.
Since people are asking whether this law is needed or useful at all: I would say it's either completely redundant or very much needed. I'm not a lawyer, so I don't know which of those two it is, but I suspect the latter. I'd be surprised if, a few years from now, we haven't seen someone try to escape legal liability by pointing a finger at an AI system they claim was autonomously making the decisions that caused the harm.