What I'd assume Valve is worried about is that it only takes one major decision against Stable Diffusion in court to suddenly leave us in a state where "this game used Stable Diffusion" is all the proof of infringement that's needed.

Given the whole "Stable Diffusion reproduces the Getty Images watermark" lawsuit[1] that's still ongoing, it's not an idle concern.

[1]: https://www.theverge.com/2023/1/17/23558516/ai-art-copyright...




> What I'd assume Valve is worried about is that it only takes one major decision against Stable Diffusion in court to suddenly leave us in a state where "this game used Stable Diffusion" is all the proof of infringement that's needed.

Hard to see any plausible outcome that would have that result for users of SD. (If model training isn’t fair use, that’s definitely a blanket-liability issue for Stability.AI, and Midjourney, and OpenAI, and lots of people training their own models, either from scratch or by fine-tuning, using others' copyright-protected works.)

But “using a tool that violates copyright in the workflow” is not itself infringement; whether and in what situations prompting SD to produce output makes the output a violation of copyright (and whose) would be a completely different decision. And while I can certainly see cases (such as deliberately seeking to reproduce a particular copyright-protected element, like a character, from the source data) where it might be (irrespective of the copyright status of the model itself), I haven't seen anyone propose a rule that could be applied (much less an argument that would justify it as likely) based on copyright law that gets you to “used SD, in violation”.

Lots of blanket ethical arguments about using it, but that’s a different domain than law.


> I haven't seen anyone propose a rule that could be applied (much less an argument that would justify it as likely) based on copyright law that gets you to “used SD, in violation”.

The speculative worst-case outcome I've seen suggested for these tools is the legal system deciding that an image generated with them is a derivative work of every image that was used to train the model. Since none of these models was trained solely on images the developers had rights releases for, that would make them incapable of outputting images that don't infringe the copyrights of vast numbers of people.

I can't say how likely that is to actually be the legal outcome, of course, but it seems like the sort of concern that might lead to Valve's policy here.



