GW has put an immense amount of effort over the years into reworking their previously more generic marketing and lore elements to make them more distinctly copyrightable and trademarkable. They're not going to let possible future lawsuits over LLM training data and the like screw that up.
And that’s probably the OpenAI killer. If any of my work product from now to 2030 could legitimately be entangled in any of the millions of coming copyright claims, I am in a world of hurt.
This rush to use LLMs in everything could be undone by a single court decision - and the sensible thing is to isolate yourself from that risk as much as you can.
Also, I don't think it will be easy to defend a copyright on AI-generated images, especially if your IP is 'lots of humanoid soldiers in power armor' rather than specific characters.
> If any of my work product from now to 2030 could legitimately be entangled in any of the millions of coming copyright claims, I am in a world of hurt.
Right... there has been ample code and visual art around to copy for decades, and people have copied it, gotten away with it, and nothing bad has happened. So where are the "millions of coming copyright claims" now?
I don't think what you're talking about has anything to do with killing OpenAI; there's no one court decision that would cover all of this.
> there has been ample code and visual art around to copy for decades, and people have copied it, gotten away with it, and nothing bad has happened
Some genres of music make heavy use of 'samples' - tiny snippets of other recordings, often under five seconds long. Always a tiny fraction of the original piece, always chopped up, distorted, and rearranged.
And yet sampling isn't fair use - artists have to license every single sample individually. People who release successful records with unlicensed samples can get sued, and end up paying out for the samples that contributed to that success.
On the other hand, if an artist likes a drum break but, instead of sampling it, pays another drummer to re-create it as closely as possible - that's 100% legal, and the copyright issue goes away.
One could imagine a world where the same logic applies to generative AI - where art generated by an AI trained on Studio Ghibli's work is a derivative work in the same way a song with unlicensed drum samples is.
I think it's extremely unlikely the US will go in that direction, simply because the likes of Nvidia have so much money. But I can see why a cautious organisation might want to wait and see.
Indemnification only means something if the indemnifying party exists and is solvent. If copyright claims over training data gained traction, the indemnifying party would be neither, so it doesn't matter whether they offer it or not. They probably won't exist as a solvent entity in a couple of years anyway, so even the question of whether the indemnification means anything will go away.