Hacker News
[flagged] Valve is not willing to publish games with AI generated content anymore? (reddit.com)
622 points by Wouter33 on June 29, 2023 | 395 comments



Why editorialize?

"Valve is not willing to publish games with AI generated content anymore"

Your title changes the meaning: they didn't ban games, afaict.

It's also a misleading post, as this specifically concerns GenAI where authors can't prove rights to, or don't have rights to, the content.

If you use ProcGen etc or have full rights to the data used, I can't imagine there would be any issues.


> we cannot ship your game while it contains these AI-generated assets, unless you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game.

Yep, you are absolutely right.


This is exactly how this stuff should work. Greedily scraping whatever content you want hurts people who made that content and ultimately hurts AI developers who still need more high quality data.


I'm still not convinced by this argument. All human-generated art is also a result of the artist's experiences, and is strongly influenced by other art they have consumed. So why should GenAI not be allowed to blend the work of other artists?

If you want to argue that we should fundamentally treat machine- and human-generated works differently, that's fine; but it's a different argument from "looking at a bunch of art and then synthesising ideas is bad," because that's exactly what many (most?) human artists do.


There's a pretty big fucking difference between the organic experience of a human being and a massive VC funded hellsystem that can process 400 million exact copies of images and generate thousands per day.

I honestly can't believe people are still making this dishonest, bad faith argument. It's obviously problematic if you think about it for more than 3 minutes.


If you want to talk about bad faith arguments, calling the AI a "hellsystem" is showing your bias just a bit.


I believe the expectation is that there is a difference between new creative work by humans and the output of tools. Tools are not 'artistically influenced' by their inputs.

Also, a human can take a work, modify it, and create a derivative work. They do not get copyright to the original material, and whether the degree of derivation is enough for them to fully own the new work is a winding, blurry line drawn case by case through the court system.

I suspect these will dominate the arguments in the first court cases around generative AI art: that the artist (operator) is the one who has to justify that they provided enough creativity in the process to create an independent work.


> you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI

I think Valve is overreaching with this policy.

It should be that you need to prove that the art used in the game itself does not breach any copyright. The tool used to create said art has no bearing on the final art with regards to copyright.

Otherwise, would valve also have mentioned that the developer should also produce evidence of their photoshop license/subscription (if they used photoshop in the course of making their game)? Do they need to check that the version of windows being used to make the game is a licensed one?


> The tool used to create said art has no bearing on the final art with regards to copyright.

This is still an open question with regards to AI art generation tools. Do you have recent legal precedent to cite that I don't know about, or are you just making things up?


> This is still an open question with regards to AI art generation tools.

While it is an unanswered question with no legal precedent, I believe that what isn't currently illegal is, and should be, considered legal until harm has been shown and a legal ruling is required.


Valve may be anticipating that it will become illegal, in which case they would have to pull games from the store and issue refunds. They may be waiting to see which way the wind blows on this before they get their revenue stream mixed up with it.


It would be so nice if they reflected that in their submission policy if that is the case.

But that requires Valve to be transparent to developers, and we know how well they are with that.


Do you have a legal decision that AI art forms no derivative of a copyrighted work, vs a derivative of ALL the copyrighted work in its training set?

There is no new law, there is ambiguity with the lack of current case law. There are certainly people who think generative art based on scraping the internet is infringing under current legal standards. We won't know until people go to court and judges make decisions.

Valve's insistence in the face of their own liability for selling derivative works seems a sound business decision here, at least when selling in markets where it is an open legal issue.


> it's specifically GenAI where authors can't prove or don't have rights to content.

Even more specifically, the author admitted to the images being "obviously AI generated" and Valve alleges that the images themselves in the game's initial submission contained copyrighted third-party content.


Thanks. The submitted title ("Valve bans games using AI generated graphics or text from Steam") broke the HN guidelines: "Please use the original title, unless it is misleading or linkbait; don't editorialize."

Even the original title seems questionable until properly substantiated, so I've reverted to it plus tacked on a question mark.


No bad intentions with the title, was just using "banned" since Valve used it in their response to the Reddit poster. Change it to whatever you think is better! :)


I haven't followed any of the details so you could well be right (in which case, sorry!)


Yeah this seems like the title should be edited. @dang (not sure how to get his attention)


@dang is a no-op. The only way to get reliable message delivery is to email hn@ycombinator.com. Fortunately someone did that. I'll take a look at the title situation now.


Alternatively, typing "@dang (not sure how to get his attention)" seemed to work. :)


It's not Valve's job to police game assets. If it were, then they should be removing plenty of other games. They don't, because they don't look and don't care. This is yet again more AI-phobia.

Under U.S. law you don't need to prove rights for content used in this way. It's transformative and does not replicate the heart of the original work, therefore it's protected under fair use. AI generated content cannot be copyrighted, so they cannot claim ownership regardless.


This is correct, see:

https://www.artnews.com/art-news/news/ai-generator-art-text-...

The AI generated images are not protected by copyright because they are not authored by a human. Whether or not it is legal to train a model on copyrighted images is irrelevant for this. So it seems clear that Valve made the wrong call here.


> After reviewing, we have identified intellectual property in [Game Name Here] which appears to belongs to one or more third parties. In particular, [Game Name Here] contains art assets generated by artificial intelligence that appears to be relying on copyrighted material owned by third parties. As the legal ownership of such AI-generated art is unclear, we cannot ship your game while it contains these AI-generated assets, unless you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game.

Valve's worried that AI-generated art is in a murky copyright state, and don't want to open themselves up to being sued.


It's just a random reddit post. The OP on reddit didn't even post his game.

Three possibilities:

1. It's just fictional. Probably written by a troll or generated by ChatGPT.

2. Steam refused to publish the game due to some obvious copyright issues (like they told Midjourney to generate Superman or One Piece characters)

3. Steam is banning any AI generated assets.

My bet is 1 > 2 > 3.


It's a really strong indictment of the state of journalism that "I read it on reddit" has become sufficient to turn into a news story.

The internet is flooded with content right now to the effect of "Valve might be doing this thing", but not one of those sources has actually reached out to Valve for comment. Instead they all cite a random commenter on Reddit (or they cite each other).


What journalism? This is a link to reddit


https://news.google.com/search?q=valve

There are dozens of articles that are just summaries of this reddit thread with no further effort put into them, and that's pretty much the norm these days for a lot of content.


Soon they will be auto-generated with ChatGPT from reddit threads...


A lot were already automated prior to ChatGPT


Sure, but I like to think that I could pretty quickly recognize them. Much harder with GPT.


True dat


I see three relevant articles (the rest aren't relevant, e.g. talking about the Steam summer sale), from sites that look more like content farms than legitimate news websites. But yeah, there are too many content farms nowadays.


Ah very true, thanks



> Neither Valve nor potterharry97 were immediately available to respond to a request for comment.

I wanted to point out that journalists at least check their sources...


"Immediately available" could mean "we sent out the email right before we pressed submit". That's better than nothing, but it's still not journalism.


"Journalists" are like vultures now, they just run towards anything to publish as much as possible with absolutely zero regard for any journalistic ethic.

Might as well just replace journalists with AI at this point. To a large group of people (me included), they've made themselves more hated and untrustworthy than any company, economist, politician or civil servant.


By and large, it's true. But Reuters still does some decent investigative journalism, imo.


I bet it's 2, because OP admitted the images were "obviously AI generated from the hands", and a lot of AI generation makes really funky hands that you have to fix because it looks so bad.

So it sounds like OP slapped some half-assed generated images into a game and tried to submit it. Valve now can't really trust someone that does that to have done any due diligence.


The OP's only other reddit post says that they want to start a corporation to avoid associating their name with the game, which is a pretty big red flag to me.

"I've been developing a game for a while now, and am near ready to release it on Steam. I'd prefer it not to be associated with my name (as in I'd prefer people googling my name and future employers being unable to find out i developed this game )."


There’s a high likelihood that it’s pornographic or otherwise controversial.

Use copyrighted material in porn that you charge money for and you’ll get slapped with a lawsuit faster than you can load the home screen.


Depending on the genre, that may be a reasonable action. If I were to uncharitably assume OP submitted an adult game with rip-off characters, that would explain both OP & Valve's behaviors. If this transpired at all.


Given the tech involved and allegations I agree with the others that this is a smut game, but shielding yourself with an LLC is a smart move for anybody doing anything controversial, commercially.

Ask Alex Jones...


I wouldn't put my name out there for any piece of media I wanted to publish. People on the internet are insane.


ehh, I've thought about it too. It's almost recommended if you want to make adult games. Even if I don't care about employers who'd reject me for that stuff I don't want crazies harassing my family just because I wanted to make a game with penises in it.


>Steam refused to publish the game due to some obvious copyright issues (like they told Midjourney to generate superman or one-piece characters)

from post:

> contains art assets generated by artificial intelligence that appears to be relying on copyrighted material owned by third parties.

So I'm guessing 2


I'd be less sure about that, Valve has taken stances for reasons of public demand and/or their personal ethical standards (depending on what you believe) before, for example the ban on Blockchain games.


But not for Tyrone vs. Cops[0]

"When a game comes up as problematic, it gets flagged a bunch by steam users, and there's a meeting that takes place where they decide if these things stay on steam"[1].

0: https://store.steampowered.com/app/1853200/TYRONE_vs_COPS/

1: https://youtu.be/hDjxBrgtJXc?t=974


Well... Valve is a very interesting study in that regard. They have voted for violence, topics bordering on meme hate speech, and pornography, but have voted against crypto shit and now AI.

Not making a verdict either way but I find it interesting. I'd like to know more about the internal discussion(s) that took place to establish their frameworks. Especially given the company is private.


For what it's worth, I think I would vote the same way, perhaps with the difference being against the hate speech depending on how bad it was.

Games with significant Crypto and AI art components bring significant risks to Valve in both legal and social contexts (99.9% of modern crypto-related projects are an intentional scam, and AI art is a legal minefield right now).

On the other hand, violence and pornography are much more accepted by society (in the context of fictional entertainment).


I think it makes perfect sense that they care more about their users not falling for a crypto scam on their platform than about the actual content of the games.


This developer's next game was banned though: https://twitter.com/Team_SNEED/status/1651022411368628224. They're fairly inconsistent about it.


Valve's ban on blockchain games was more related to their cut of revenue and their desire not to bolster systems that bypass it, probably along with KYC concerns around converting Steam wallet money into crypto.


I usually read HN comments before articles. It's a good filter. Seeing that the article was just a Reddit post, I stopped reading the comments at this one.


But it is just a random HN post!


I don't visit reddit anymore so I need to see more substantiated claims before I give this any thought.

Besides my already established biases towards AI: It's threatening to creative endeavors, not because it exists, but because it will impact the earning potential of creatives.


4. This is intentionally floated with nebulous veracity in order to gauge public reaction before making it official.

If so, the reaction I've seen is quite positive. Very unlikely though.


Sounds like a cop-out. They can't possibly verify that any content they're hosting isn't already copyrighted, let alone in a "murky copyright state".


They have a manual review step when you submit a game. Although you're right that they can't catch everything, they can certainly catch obvious things.

I'm sure they mostly just don't want to wind up in court with a lawyer being able to say that they let [blatant example here] get published on their store. So long as they can credibly claim that there was no way for them to tell something was in an objectionable category, I'd imagine they're fine with it.

Their rules, if you're curious: https://partner.steamgames.com/steamdirect


I doubt their manual review actually does much of anything. There are already tons of "games" that don't actually function that are just pre-built engine assets shoved together.


Those games aren't breaking any rules, why would them being allowed through imply the manual review "doesn't do much of anything"?


I wonder how automated their system is. They obviously wouldn't boot the game up and start walking around because they can just extract the media files and check. But I'm curious if there is a system that identifies copyrighted images/video stills and searches for copyrighted words.
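A system like that is often built on perceptual hashing: reduce each image to a compact fingerprint, then compare fingerprints by Hamming distance against a database of known copyrighted assets. A minimal sketch (purely hypothetical; Valve's actual pipeline is unknown, and `average_hash`, the tiny pixel grids, and the distance threshold here are all illustrative):

```python
# Toy perceptual-hash matcher: "images" are small grayscale grids
# (lists of rows of 0-255 values) standing in for real decoded media.

def average_hash(pixels):
    """One bit per pixel: set if the pixel is brighter than the image mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(1 if p > mean else 0 for p in flat)

def hamming(h1, h2):
    """Number of differing bits between two hashes."""
    return sum(a != b for a, b in zip(h1, h2))

def looks_like_known_asset(candidate, known_hashes, max_distance=2):
    """Flag the candidate if its hash is near any known copyrighted image."""
    h = average_hash(candidate)
    return any(hamming(h, k) <= max_distance for k in known_hashes)

# A "copyrighted" 4x4 image, a slightly brightened copy, and an unrelated one.
original = [[10, 200, 10, 200],
            [200, 10, 200, 10],
            [10, 200, 10, 200],
            [200, 10, 200, 10]]
near_copy = [[p + 5 for p in row] for row in original]
unrelated = [[5, 5, 5, 5],
             [5, 5, 5, 250],
             [5, 5, 250, 250],
             [5, 250, 250, 250]]

known = [average_hash(original)]
print(looks_like_known_asset(near_copy, known))   # → True (near-duplicate)
print(looks_like_known_asset(unrelated, known))   # → False (different structure)
```

Real systems would use more robust fingerprints over normalized images (e.g. pHash-style DCT hashes) so that resizing and recompression don't change the hash.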


> Sounds like a cop-out.

Sounds like due diligence.


And a sound business decision until the copyright law works itself out.


> Sounds like a cop-out.

Adhering to legal and copyright standards isn't a "cop-out"


And neither is choosing to act in a situation where the legality isn't clear.

I understand that OpenAI et al. would like to assure all their investors and customers that there's nothing legally problematic with using an AI to launder away copyright infringement, but we're going to need a few lawsuits to have the matter settled.


And Valve leadership has made a decent decision: "we don't want to be the defendants in what could be a costly and time-consuming lawsuit over something we didn't make".


someone failed their ethics class in college....


And which standard is that for AI generated art?


A lot of AI generated images retain the watermark of the copyrighted images the model was trained on. If you sell something with that image with no agreement from the rights holder, it is not fair use.

It is completely reasonable for Valve to forbid this until it is sorted out. Keep in mind they are a company of IP creators, creating a marketplace for IP creators. The whole reason Steam was created was to establish DRM that fought the piracy of Half-Life. I am on the side of Valve in this.


I believe the AI generates a watermark because so many examples contained it.

Imagine taking a really dumb gig worker, showing him 10000 images, some of them with watermarks, and then telling him "draw a red car, kinda like the kind of images you saw". There's a decent chance you'll get a red car that looks nothing like any cars in the data set (original work), and yet he'll paint a memorable watermark on top because so many examples contained it, you said "kinda like the kind of images you saw", and he doesn't understand that the watermark isn't meant to be part of the picture. I believe that's what's happening.
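That intuition can be made concrete with a toy model (purely illustrative, nothing like a real diffusion model): a "generator" that includes any feature appearing in most of its training examples will emit a watermark feature even in outputs that copy no single training image.

```python
# Toy illustration: a "generator" that includes any feature appearing
# in a majority of training examples. If most training images carry a
# watermark, the watermark shows up in "new" output even though no
# single training image was copied.
from collections import Counter

def naive_generate(training_images, requested_features, threshold=0.5):
    """Return the requested features plus any feature common in training."""
    counts = Counter(f for img in training_images for f in set(img))
    common = {f for f, c in counts.items()
              if c / len(training_images) > threshold}
    return sorted(set(requested_features) | common)

# "Images" as sets of feature tokens; three of four carry a watermark.
training = [
    {"car", "red", "watermark"},
    {"car", "blue", "watermark"},
    {"truck", "green", "watermark"},
    {"car", "yellow"},
]

print(naive_generate(training, {"car", "purple"}))
# → ['car', 'purple', 'watermark']
```

The output contains "watermark" even though it was never requested and no individual training image was reproduced, which is exactly the failure mode the gig-worker analogy describes.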


They don’t ‘retain’ a watermark. They ‘reproduce’ the watermark.

It’s entirely possible for a diffusion model to produce an original work and yet still hallucinate a ‘shutterstock’ watermark onto it, in much the same way as GPT can hallucinate valid-looking citations for legal cases that never happened.


That's a completely fair point a few people have made. But I think the idea is, if you are a creator, the AI is doing something that you might call copying if it were a person. If it does that, what is your redress as the creator?


To correct the common misconception: Sometimes AI image generators insert a watermark because they have seen a lot of watermarks on certain kinds of images during training. This does not mean that the image itself is a copy of any particular image in the training data.

Producing (distorted) copies of images in the training data takes some real effort, and typically only occurs for images which are heavily repeated in the training data... Most of the complaints along these lines can be compared to complaints that cars cause massive bodily harm if you steer them into lampposts: the problem is easily preventable by not driving into a lamppost.


I think the "well it's transformative" argument is pretty bad faith and I think a lot of the people making it might know that.

Generative AI cannot exist without pre-existing bodies of work created by human labor. It also displaces that labor and hurts the people whose content was a requirement for AI to exist. From this view, AI is not fair use.


There are multiple jurisdictions where there have been rumblings that an AI-generated work is possibly a derived work from every single work that the AI was trained with. This hasn't been properly tested in court, but I would give very high odds that the standard will be upheld at least somewhere where Steam sells things.

If this is true, then ordinary copyright law means that AI-generated media cannot be used unless you have a release from every bit of training data you used. At least some of the currently existing AIs were trained on datasets for which such releases are impossible, so they should not be used.

Also, for the love of god, do not use any of the AI coding assistants, or if you do, at least never publicly admit you do.


> multiple jurisdictions where there have been rumblings that an AI-generated work is possibly a derived work from every single work that the AI was trained with

This should apply to humans as well then, because brains ultimately do the exact same thing. Nobody creates art in a vacuum.


Good


[flagged]


Sure, the method of making the image, such as being AI generated, is entirely irrelevant in terms of IP enforcement. You could cut a cross-section from a log that had coincidentally formed the Nike symbol with its rings, and if you slapped a picture of it on your line of sportswear, you'd better believe you're going to get owned.

But if they see an increased risk of IP violations from AI generated assets— and given the Getty red carpet debacle that's entirely reasonable— banning it will probably save them a whole lot of money on manual game reviews.


The Nike example is trademark rights, not copyright.

If you give a worker 5 examples of cars, and tell him "draw me a new car in this style", and he does so (from memory without clearly copying any individual example), it's unlikely to be a copyright or other IP violation.


Ok, fine... an image of Goofy. Both are judged on how similar they are, not the tools with which they were made.

https://en.wikipedia.org/wiki/Substantial_similarity

> Judge John M. Walker, Jr. of the U.S. Court of Appeals for the Second Circuit noted in Arica v. Palmer that a court may find copyright infringement under the doctrine of "comprehensive non-literal similarity" if "the pattern or sequence of the two works is similar".

You're going to have a much harder time proving that you absolutely did not copy something if you had an image of what you're being accused of copying in the dataset you used to make it. If the images are deemed substantially similar, it will be deemed an infringement.


> If you give a worker 5 examples of cars, and tell him "draw me a new car in this style", and he does so

Yeah that's great, but it actually has nothing to do with how the AI works. A worker learning through observation about 5 cars is a hugely different situation from an AI company scraping 400 million often copyrighted images onto their servers to run through a training algorithm to create a for-profit system that displaces the people who produced the original images.


IANAL. But... show me the case law establishing there is near-zero risk to them if they let it go through.

People make business decisions all the time to avoid murky areas that may hold peril. Unless there is a big benefit to them, why take the risk?


There is no case law. Literally anybody saying that things related to this are illegal is making up boogeymen.

Yes, businesses act like this, no shit.

Doesn't mean we shouldn't call them out on it because it's cowardly behavior.


Being cautious, when not being cautious could mean lots of big lawsuits against you, doesn't seem that ultra-super conservative. I hope this ends up going the other way, but I understand Valve's calculus here.


regardless of legality: the odds are games with AI generated materials are going to be much lower quality

(shovelware)


Made by people with none of the skills or drive to make games.


> https://www.artnews.com/art-news/news/ai-generator-art-text-...

US Copyright Office has stated unequivocally that AI works cannot be copyrighted, or otherwise protected legally.

The US patent office is studying the effects of AI on the patent system and asking citizens and businesses for comment.

If that’s not enough for you, I don’t know what would be.


That's meaningfully different. "Can't be copyrighted" doesn't mean "can't be sold", or "someone else owns the copyright". It just means someone can copy and resell the generated portions without payment/licensing.


That sounds like a fucking nightmare if you're running a marketplace for what is essentially intellectual property.


I'm not sure. I'm not an expert, but it doesn't seem that different from including public domain text and art in your game.

I assume that, if it is true that Valve isn't allowing games with generated images, it's because (they feel) the legal status could change, not because of the current status.


Yes that's exactly what I'm saying lol.

There's also a quality argument. If Valve lets a bunch of slapdash AI hackjobs onto the store that were developed in a week by people who don't know anything about game development, and that makes it harder to discover well made games, that's a meaningful business risk for them. They're responsible for curating the steam store.


That is a shallow regurgitation of their opinion that has been repeated out of context in headlines, but it misses their point. The Copyright Office's opinion can be better summed up as:

1. Copyright protects work that humans create

2. Humans sometimes use tools to create their works, that is okay

3. Y'all make up your mind whether your AI is some sentient being or whether it's just a tool. We're just lawyers.

If the wind blows and your typewriter falls off a shelf and writes a novel, it isn't subject to copyright either. That doesn't mean that all works written using a typewriter aren't subject to copyright. It means a human must be part of the creative process.


But what if the wind blows, and my laptop falls off a shelf and writes the source code for windows 95, but reindented, with some implementation details and variable names changed?

It’s pretty clear that the “neural networks are just a tool” ruling is going to have to be revisited eventually (and probably soon).


> But what if the wind blows, and my laptop falls off a shelf and writes the source code for windows 95, but reindented, with some implementation details and variable names changed?

Simple. If it wasn't created by a human, it's not eligible for copyright. The law is quite clear about this.

Microsoft gets the copyright to Windows 95 because they wrote it with humans. You wouldn't get it because you didn't write it. Your laptop wouldn't get it because it isn't a human.

> It’s pretty clear that the “neural networks are just a tool” ruling

I think you misinterpreted the above. There is no "“neural networks are just a tool” ruling".

The copyright office never said neural networks were or were not a tool.

They said if a human makes a creative work, and they happen to use a tool, then it is eligible for copyright. As it always has been.

All they said is what every lawyer already knows, which is that a work has to have an element of human creativity in order to be eligible for copyright.


But, if my laptop’s implementation of windows 95 is not eligible for copyright protection, then I can freely redistribute it because no one can use copyright law to stop me, in a runaround of Microsoft’s copyright on windows 95 (which the laptop generated version is clearly a derivative of).

This is exactly the ambiguity Valve is concerned about.


But the hypothetical world in which your laptop falls off a shelf and randomly writes Windows 95 is a fake one.

LLMs aren't random number generators running in isolation.

They're trained on copyrighted material. If they regurgitate copyrighted material, we know where it came from. It came from the training material.

Valve is rightly concerned that non-lawyers have no clue what they're getting themselves into when using the current generation of AI models. The inability to determine whether an output is a substantial copy of an input is not a free pass to do whatever you want with it, it's a copyright infringement roulette.

There are way too many people in this industry who believe that building a technology which makes compliance impossible is the same thing as making compliance unnecessary.


> US Copyright Office has stated unequivocally that AI works cannot be copyrighted, or otherwise protected legally.

The “or otherwise legally protected" piece is outright false (and would be out of their scope of competence if true); the other part is true but potentially misleading (a work cannot be protected to the extent that AI, and not the human user, “determines the expressive elements of the work”, but a work made with some use of AI where the human user does that can be protected to the extent of the human contribution.)

The duty to disclose elements that are created by generative AI in the same guidance is going to prove unworkable, too, as generative AI is increasingly embedded into toolchains with other features and not sharply distinguished, and with nontrivial workflows.


That's surprising. Do you know if their definition of 'AI' includes things like generative fill in Photoshop?


> ultra super conservatively cautious.

This has nothing to do with politics.

This has everything to do with CYA, the issue is AI trained with copyrighted material is a huge gray area and they don’t want to be in the gray area. That’s rational and has zero to do with “conservative”.

This is likely not set in stone and after the copyright laws and courts catch up and decide what to do, Valve will likely go back and update their policies accordingly.


>> ultra super conservatively cautious.

> This has nothing to do with politics.

> This has everything to do with CYA, the issue is AI trained with copyrighted material is a huge gray area and they don’t want to be in the gray area. That’s rational and has zero to do with “conservative”.

The word "conservative" isn't a political word in all (or even, I would have thought, in most) contexts: its normal meaning is similar to "chosen so as to be careful". For example, a "conservative estimate" isn't "an estimate that leans to the right of the political spectrum": it is an estimate which has been padded out in the direction in which you are more likely to be wrong.

When someone says they are being "ultra super conservatively cautious" they are merely being super extra extra doubly cautious, as we are stacking similar adjectives (as one might do with another adverb such as "carefully"). So, wanting to avoid being in a gray area is dead center to being "conservative" in one's curation or legal strategy.


> This has nothing to do with politics.

Please tell me what, in your opinion, a "conservative garbage collector" is, without looking it up on Google.


What are they copping out of? What are you suggesting their real motivation is?


I'll suggest a real motivation: they want to make their own AI-generated game and have first mover advantage while they work out all the scary AI copyright issues; issues they already deal with, because the same problem exists with human-generated art.

Why do I say that? They want the developer to prove they only used material they created to do the training and Valve has the resources to follow that rule unlike the rest of us.


I don't think valve wanting to make a game is a realistic argument for anything in 2023. I'd believe any other reason than that lol.


> They want the developer to prove they only used material

No, they only want the developer to "affirmatively confirm" it. It doesn't say anything about Valve demanding some sort of proof.


yea, and the game is half-life 3


Well, they do verify. Think again, or say something that isn't just "I know something I made up about them that I didn't look up".


Risk management. AI generated content has high likelihood of copyright infringement.


funny, a while ago MSFT said Copilot was not stealing code, it was merely reading it...


Microsoft wants to leverage LLMs to expand their influence in the software development market. For them, Copilot is both revenue source and a moat, so it behooves them to claim that these models don't constitute copyright infringement. But there's no business benefit to Valve in allowing AI-generated art assets on Steam, and a small (though nonzero) amount of risk.


The best case scenario for Microsoft would be supplying the world with programming tools far ahead of all others (no idea, haven't tried any of that stuff), while maybe not getting sued to bits. The best case scenario for Valve would be not getting sued to bits while getting even more spammed by low-effort money grab attempts that hope to luck into virality than they already are.

At first approximation, yeah, the risk of getting sued to bits might be roughly the same. But the upside is not.


I'd say that too if I was Microsoft, but that doesn't make it true.


And Microsoft isn't the government. So I see no bearing on the actual issue at hand, which is Valve protecting its own ass from lawsuits that are murky at best.


It isn’t a settled legal issue yet. It could be that Valve and Microsoft are responding to different incentives, because they have different business models. But it could also just be that their lawyers have different legal opinions.


It's incorrect to say that Valve can't verify whether any content they are hosting is copyrighted.

They are obviously able to identify some copyright material.


Why would Valve be on the hook for copyright? If a particular game developer happens to get sued (which happens regardless of AI), all Valve has to do is remove the game from Steam.

Assuming this is even real, it may have more to do with preventing another 1983-style video game crash resulting from the market being overwhelmed with crappy games.

Then again, most AAA games today are broken pieces of suck, so IDK.


If Valve takes a % of sales from a game that is full of copyright infringement, they can be sued for their cut from the sales plus maybe more and they need to pay the lawyers.

They also care about their reputation amongst content-producers (game makers). Youtube faced this exact dynamic back in the day and have found it better to side with the large creators who care very much about protecting IP rights and so they exercise a heavy hand against copyrighted material.


"all Valve has to do is remove the game from Steam."

And take the reputation hit that would go along with that. Valve's business is 30% technology and 70% the reputation of being much less untrustworthy than the alternatives. If they lose that they can close shop.


Valve's business is its user base, as long as it has that I think they will survive.


How are reputation and user base not the same?


99.9999% of the user base won't care about a few games Valve removed from its store. And they will care even less if Valve decides not to publish a game at all, like it did in this instance. Valve is not really losing any reputation among users for not releasing some games.

Valve might lose some reputation among devs if they keep rejecting or removing games, but that is far from reality, at least for now. Things might change if game studios start picking up AI-generated content more and more and Valve decides to blanket-ban them. But even then, Steam's user base is a juicy target for devs and a hard market to ignore.


Man, getty images is sitting on a goldmine opportunity to get into AI here. They have enough images to train something quite useful!


AI is already trained on Getty's image set... it's why people have to exclude watermarks from their prompts.


Sure, but that doesn't relate in any way to the legal problem that this post is about.


Well, that was the entire point of banning AI generated content, right? Or, are you implying that the article states different reasons for the ban?


> that was the entire point of banning AI generated content, right?

But AI generated content is NOT banned. You just have to prove you have the copyright (or permission) for the training data.


Which honestly sounds rather reasonable and should be the expectation of all current AI products.

The only reason it wouldn't be easy enough to provide is if you just scraped any available data set with a complete disregard for intellectual property.


Humans take in inputs and transform what they learn into distinct outputs. Training data is essentially just a machine doing the same thing. Knowingly scraping pirated material would be one thing, but essentially having the machine view publicly-available material is not clearly at odds with existing IP law.


And if your output isn't distinct enough from the inputs, you too aren't allowed to claim copyright or sell your work without proper licensing.

With AI we can at least be 100% certain which inputs you trained it on and under which license, making the whole thing a lot easier to deal with than with humans. The liability is the same, but it's much easier to avoid legal complications, so why not play ball and ensure that you have the correct licenses?


Have what correct licenses?

It's not clear that you need any license to train on data in the vast majority of cases. Having a license to train on it won't guarantee that you can grant your users a license to any particular output, especially given the addition of user input. And most of the utility is in creating outputs that are indeed distinct.

So the answer to "why not play ball?" is:

1. It's not clear that it's legally required.

2. It would be incredibly expensive and/or slow progress dramatically, or limit you to pre-existing licensed content (e.g. Adobe), which drastically reduces some types of capabilities.

3. Given #1, for any company that doesn't have an Adobe-style library, "playing ball" is essentially betting the company that it will become legally required, because on top of developing an AI model you're going to have to become an expert content-licensing and documentation studio.


What humans do when they learn is a completely separate issue and has absolutely nothing to do with how this technology is regulated.


It's the most analogous process we have; the courts may not end up treating them the same, but it's unlikely they'll say it has nothing to do with how AI gets regulated.


I'm quoting the relevant bit from Valve's email in the reddit post, to indicate exactly what they're actually forbidding.

Notably: not all AI-generated content, but rather AI-generated content from models that were trained on material that's not owned by the person submitting the game.


Can that even be proven/tested?


They can ask you to state that you own all the relevant rights, and then, if it turns out not to be true later, they can say they didn't know.

Which seems like as much as you can hope for in a policy?


But how does this work for YouTube for example? People still upload copyrighted stuff, even if the ToS says you shouldn't.


The key is that it protects YouTube (and Valve in this case) since they can say “we weren’t allowing our users to upload copyrighted content but they snuck it in”


YouTube has to comply with things like DMCA takedowns.


So, shouldn't Steam too?


Yes and they do.


It just has to be enough that Valve's legal team can claim that they approved the game in good faith and that they can't be held responsible for a game developer lying and knowingly violating the terms of service.


Most Stable Diffusion GUIs save the generation data in the final PNGs by default. The metadata includes the model hash. The model can easily be checked against data from Civitai or Hugging Face.
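As a rough illustration of what that check could look like, here is a minimal stdlib-only sketch that pulls the PNG tEXt chunks out of an image file. The function name is hypothetical; the "parameters" key is the convention used by the AUTOMATIC1111 web UI, but other front-ends may use different keys.

```python
# Sketch: extract PNG tEXt chunks, where Stable Diffusion front-ends
# conventionally store the prompt, sampler settings, and model hash.
import struct

def png_text_chunks(data: bytes) -> dict:
    """Parse raw PNG bytes and return {keyword: text} for every tEXt chunk."""
    assert data[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG file"
    chunks = {}
    pos = 8
    while pos < len(data):
        # Each chunk: 4-byte length, 4-byte type, body, 4-byte CRC.
        length, ctype = struct.unpack(">I4s", data[pos:pos + 8])
        body = data[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            # tEXt body is: keyword, NUL separator, Latin-1 text.
            key, _, text = body.partition(b"\x00")
            chunks[key.decode("latin-1")] = text.decode("latin-1")
        pos += 12 + length  # skip header, body, and CRC
    return chunks
```

The model hash embedded this way could then be compared against published hashes to identify which checkpoint produced the image.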


Depending on the court, burdens of proof can vary a lot...


In court, it would be the entity claiming infringement that would have the burden of proof that their exclusive rights under copyright were violated by the assets in question, not the distributor of the asset that had the burden of showing ownership of every item in the training set of the model used in some part of the workflow.


What I'd assume Valve is worried about is that it only takes one major decision against Stable Diffusion in court to suddenly leave us in a state where "this game used Stable Diffusion" is the proof that's needed.

Given the whole "Stable Diffusion reproduces the Getty Images watermark" lawsuit[1] that's still ongoing, it's not an idle concern.

[1]: https://www.theverge.com/2023/1/17/23558516/ai-art-copyright...


> What I'd assume Valve is worried about is that it only takes one major decision against Stable Diffusion in court to suddenly leave us in a state where "this game used Stable Diffusion" is the proof that's needed.

Hard to see any plausible outcome that would have that result for users of SD (if model training isn't fair use, that's definitely a blanket-liability issue for Stability.AI — and Midjourney, and OpenAI, and lots of people training their own models, either from scratch or fine-tuning, using others' copyright-protected works).

But “using a tool that violates copyright in the workflow” is not itself infringement; whether and in what situations prompting SD to produce output makes the output a violation of copyright (and whose) would be a completely different decision. And while I can certainly see cases (such as deliberately seeking to reproduce a particular copyright-protected element, like a character, from the source data) where it might be, irrespective of the copyright status of the model itself, I haven't seen anyone propose a rule that could be applied (much less an argument that would justify it as likely) based on copyright law that gets you to “used SD, in violation”.

Lots of blanket ethical arguments about using it, but that’s a different domain than law.


> I haven't seen anyone propose a rule that could be applied (much less an argument that would justify it as likely) based on copyright law that gets you to “used SD, in violation”.

The speculative worst-case outcome for these tools that I've seen suggested is the legal system deciding that an image generated with them is a derivative work of every image that was used to train the model. Since none of these models were trained on images that they had rights releases for, this would mean they're incapable of outputting images that aren't infringing on the copyrights of vast numbers of people.

I can't say how likely that is to actually be the legal outcome, of course, but it seems like the sort of concern that might lead to Valve's policy here.


I may be wrong, but I believe they were just intending to summarize.


Wouldn't the DMCA safe harbor apply, it's user submitted content?


This is a bullshit argument. There is zero liability for Valve here. They are a publisher, and are fully protected by the DMCA. Have they received a single takedown request for AI generated art? Why are they judging it to be possible infringement then?


https://www.law.cornell.edu/wex/contributory_infringement

The Copyright Act does not expressly impose liability for contributory infringement. According to the U.S. Supreme Court, the "absence of such express language in the copyright statute does not preclude the imposition of liability for copyright infringements on certain parties who have not themselves engaged in the infringing activity."

One who knowingly induces, causes or materially contributes to copyright infringement, by another but who has not committed or participated in the infringing acts themselves, may be held liable as a contributory infringer if they had knowledge, or reason to know, of the infringement. See, e.g., Metro-Goldwyn-Mayer Studios Inc. v. Grokster, Ltd., 545 U.S. 913 (2005); Sony Corp. v. Universal City Studios, Inc., 464 U.S. 417 (1984).

IANAL. Considering Valve not only gives games a retail platform, has to approve games before sale, and takes a cut of that sale and assuming the reddit post isn't a lie then I am gonna guess Valve's probably well staffed legal dept decided not to take a seemingly iffy legal gamble on a game that probably wasn't going to rake in a ton of sales anyway.


Consider for just a moment that they have almost certainly thought about this much more deeply and thoughtfully than you have


I agree with your sentiment but this is a poor way to dismiss an argument.


"No big company can ever do a bad thing because I'm sure they've thought deeply and thoughtfully about it."


Actually no, me pointing out that “a literal anon in their armchair probably hasn’t thought through a challenging legal problem as thoroughly as the billion dollar company making the decision has” is not actually the same as saying that “every big company is infallible” but nice try


That may be true, but that undoubtedly won't stop people from trying to sue; clearly they've decided the effort and legal fees required to deal with that aren't worth it.


This just creates a moral hazard to not disclose the use of any AI-generated assets, which is something other creative industries have already learnt the hard way.

Recent text-to-image models have improved enough that it's possible to get realistic output, without the telltale Midjourney dreaminess, with a modicum of effort, so banning obviously-AI-generated images is shortsighted and unsustainable.


> This just creates a moral hazard to not disclose the use of any AI-generated assets

The whole space is somewhat amusing to me. What is the bigger moral hazard: openly disclosing everything about your content pipeline and getting your team's efforts shitcanned, or keeping everything private unless a court order shows up?


No one is being protected from consequences of risky behavior. Moral hazard doesn't apply.


So, the OpenAI model?


It has been widely speculated that the primary reason OpenAI never disclosed the full training dataset for GPT-3 or GPT-4 was to avoid potential legal backlash.


I prefer to think of it as the Uber/AirBnB model. Just do illegal things so much that you clog the enforcement mechanisms. Then it becomes such an unreasonable burden that they change the laws in your favor.


Classic VC bullshit.


And GitHub Copilot


That's OpenAI.


That strategy didn't work well for the submitter. Steam is rather stingy in allowing resubmissions, so the better strategy is to make your best effort to comply with the terms on the first try.


Also, this just creates an incentive to lie about what AI model / training data you used; how could anyone possibly prove the deception?

What would a satisfactory audit trail showing that no tainted AI data made it into a digital image even look like? It would involve a chain of attribution for every fraction of a pixel through all its past iterations.


Valve isn't worried about you lying; they just want your attestation so that, if someone tries to sue them, they can redirect the suit to you and, if they think it's worthwhile, sue you themselves for lying and breaching the contract. In the same way, they want you to attest to owning rights to all the IP in the products you put on their platform. It's just that IP ownership around AI content is murky right now, so it gets treated as a special case.


> It would involve a chain of attribution per fraction of a pixel through all it's past iterations.

Which wouldn't be sufficient since, as stated many times before, the diffusion process most text-to-image AIs use is not collaging.


Shortsighted, unsustainable... but still the best thing for the company to do in the meantime, in a very uncertain situation?


> This just creates a moral hazard to not disclose the use of any AI-generated assets, which is something other creative industries have already learnt the hard way.

This has been the legal street smarts for a while and doesn't seem like a big development to me. As usual, you don't admit or allude to anything. It's like when I'd write code and say no I've never even visited StackOverflow.


Midjourney has been really helpful to me as a one-man dev. I can mock up art much faster than I can in Photoshop. I still intend to do a complete pass at some point using a professional artist (or learn to draw myself), because generative art is not thematically consistent from asset to asset. But if I just want to see what my tile assets would look like done in a 30s art deco style, I can do it in 20 minutes.

As placeholders, or to create little bits and doodles (like a mouse cursor in the style of an armored fist), there are lots of little graphical icons in a game that would otherwise have to be created by a graphical artist. Generative art is really useful in my experience.

It's reduced the work to the point where I can toy with it in my off time and spend most of my effort in the actual programming and development.

The other idea I have toyed with, coming from professional ML experience, was to build my own generative model and use it to create my own art assets. Here I wonder how the copyright rules would work: would the assets I train on be subject to copyright? This is a much bigger conversation at that point, and I won't be the only one affected.


And no one is trying to take that away from you. You are using it as a tool, and intend to pay someone to create the final version, or do it yourself.

The issue people have is when you just use a dataset trained on someone else's work and pass it off as your own, and in the case of Steam games, most likely profit from it.


What if I look at other people's art and learn from it? Seem unfair to pass that work off as my own. All artists should be banned from looking at copyrighted images, we can't risk them incorporating copyrighted elements into their own work. /s


That’s not a good analogy and you should know it.


It’s a terrific analogy. The alternative is to believe that a 5GB model somehow contains a database of 160 million images.
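A back-of-the-envelope calculation makes the capacity argument concrete (using the approximate 5 GB model size and 160 million training images from the comment; both figures are rough):

```python
# Rough capacity argument: if a 5 GB model "contained" all 160M training
# images, each image could occupy only a few dozen bytes of the weights.
model_bytes = 5 * 10**9        # ~5 GB of weights (figure from the comment)
training_images = 160 * 10**6  # ~160 million training images (approximate)

bytes_per_image = model_bytes / training_images
print(f"~{bytes_per_image:.1f} bytes per image")  # far too little to store a copy
```

A few dozen bytes per image is orders of magnitude below what even heavily compressed thumbnails require, which is the core of the "it's not a database of images" argument.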


It's a fine analogy, but the map is not the territory. Machine learning is not human learning, even if it works in a vaguely comparable way.

It's still a computer program that uses an enormous amount of copyrighted work as its input.


"With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."

It seems like you could calculate how much data is within X% error of a 5GB model, and what X% should be for 'visual data'.

I bet it's pretty big.


Yeah I don't see too much issue in using generative art for more trivial things, like some banner art to sit atop a blogpost or something. Placeholders also seem like a really good application, particularly for cases where the randomization might expose issues that real users would face. I wouldn't have spent money on these things anyway.

Direct incorporation of generative art into a commercial product is much more murky.


"The other idea I have toyed with, coming from professional ML experience - was to build my own generative model and use it to create my own art assets. Here I wonder how the copyright rules would work - would the assets I train on be subjected to copyright?"

Unless you're training on assets available in the public domain, any generated output from your "custom model" would have the same potential copyright issues as midjourney, stable diffusion, etc. What exactly are you confused about?


I think programmers also often confuse "public domain" with "publicly accessible on the internet"


We're on the cusp of a profound content shovelware crisis. It will happen regardless, but any oasis of real content will become important.

Automation will be a force multiplier for laziness and predation more so than for creativity.


> We're on the cusp of a profound content shovelware crisis.

No, we've been inundated with low-quality content for decades now. This is nothing new.

Fortunately, ratings and reviews and popularity have always been an extremely effective antidote.

Even if 99% of stuff on a platform is total crap, nobody cares. It's a non-issue. Whether you're talking about music, books, TV shows, or whatever. The 1% rises to the top and you don't honestly need to pay attention to the rest.

If you choose to pay $20 for something that has 3 reviews that are all 1-star, then that's more your problem than the system's problem.


>No, we've been inundated with low-quality content for decades now. This is nothing new.

Trends matter too. If it gets to be an order of magnitude cheaper to shill your products on social media and forums, set up seo crap articles to phish users from search results, or churn out good old fashioned email scams, then expect an order of magnitude worse signal to noise ratio on the internet as a result. There could be a point reached where the internet is functionally broken, with the signal to noise ratio too low to make it useful for anything, save for navigating directly to known good hosts that themselves will become increasingly more lucrative targets for enshittification.


Yeah that happened 10 years ago


> No, we've been inundated with low-quality content for decades now. This is nothing new.

The problem with this argument is that scale matters.

When a small number of people were trading mp3 files on FTP servers it was not seen as a problem. When Napster came out, it was seen as a problem. It was correctly seen as a qualitative shift in the effect it would have on society.


I have to agree reviews work very well. But it distresses me that, knowing the absolute tsunami of garbage that awaits us without them, we are SO laissez-faire about protecting that system. We allow companies to game reviews with kickbacks; we let spammers post fake reviews. This is currently our one wall of defense, and cracks in it should terrify us.


We are also on the cusp of individual developers being able to produce works that used to take entire teams.

I'm working on simulating a small town using Generative AI agents, schedules, social interactions, realistic reactions to outside events, dialogue between characters, the whole shebang.

A year ago that wasn't an "after work side project".

I just did a full launch on https://www.generativestorytelling.ai/ - a side project that was only possible because of AI help. Between art assets and also coding in brand new areas that I hadn't used before, AIs are an obscene boost to what individuals can do.

The price and complexity of software development projects have been increasing for years now; AI is a huge reset on the amount of effort needed to make stuff.


> AIs are an obscene boost to what individuals can do.

Depends what the individual is trying to do. Making memes? Blog spam? Sure. But for non-trivial content I haven’t seen an example that was compelling.


> But for non-trivial content I haven’t seen an example that was compelling.

ChatGPT helped me write my websocket code; I'd never used websockets before, and it saved me hours (if not more) of time learning a new API.

I had a concurrency bug in some code; I threw it at GPT-4 and asked it what the problem was, and a few seconds later it spat out a solution.

I had some complex state management code: "Hi this code has an off by 1 error in it, can you find it?"

"My page demonstrates this visual problem, here is the CSS file for the page, what is wrong?"

The color picker component used on my above linked site was 80% written by ChatGPT4.

AI is a huge productivity booster.

Heck I use it to steel man opposing views of my blog posts to try and make sure I have sound arguments.


I wouldn’t be able to trust running generative code that I don’t personally have context for, or hasn’t been reviewed by an experienced peer; even if it seems to work as intended. It’s the kind of irresponsible OceanGate mentality. As for blog posts, “better than nothing” doesn’t automatically make content “compelling”.


In this context: Generative ML models allow e.g. a single motivated writer with almost no budget to make a Visual Novel which they then could publish on Steam (before the policy change) for the world to see.

Write the script yourself, generate and curate 2D art assets, optionally generate and curate your OST / BGM, optionally generate and curate voice lines, put everything into Ren'Py. Done.

It's still very much not easy if you do not want to make shovelware, but it's possible now for a sole developer (or very small team) with no great artistic and musical talent.


> but it's possible now for a sole developer (or very small team) with no great artistic and musical talent.

Better than nothing and compelling are different things.


I think game art for programmers or people with a game design idea but no visual arts chops is a very fitting use. As would be generating text for such a use, like dialog for an NPC.


> I think game art for programmers or people with a game design idea but no visual arts chops is a very fitting use. As would be generating text for such a use, like dialog for an NPC.

People aren't nearly creative enough with this.

You can use generative AI to give you a complete schedule for every NPC in the town. You can use generative AI to determine what the relationships between people in the town are.

I am working on using generative AI to simulate an entire small town.


There’s a long way to go from “no artistic chops” to “compelling art” and although this is subjective, I haven’t seen AI getting there.


I'm not an expert or especially well versed, but I feel like I've seen things from midjourney that could pass for semi-professional art, the kind that might be sold at a small time park exhibition on a sunny afternoon.

And with somebody super focused on figuring out the right prompts, maybe tweaking the images slightly, I think it could be workable in a game for sprites, textures, backgrounds.


We're on the cusp of low skill know nothings flooding the internet with low quality trash. AI produced content will not have value because you don't add anything meaningful to that process, you're entirely subordinated to the quirks of whatever models you're using. How do you compete in a sea of games that look exactly like yours because you're all using the same garbage to build the aesthetic value of your game?


>We're on the cusp of a profound content shovelware crisis.

Everywhere. Games, porn, text, articles, music, etc. For the generation that grew up with the internet already existing, this is their epochal moment: life pre- and post-generative-AI.

I was thinking the other day that original artworks are going to be far more valued with a glut of AI generated.

Thinking slightly ahead: you can find your absolute favorite artist and, in seconds, use the style you love so much to make the family portrait you would never be able to commission them for. But going forward even more…

It's just not the same as a print, right? Well, thanks to AI being able to learn and determine where brush strokes would land, we can combine that with advancements from 3D printing, and your desktop painting rig picks up a brush and paints it just as the artist would have.

Then going forward even more.. the artist himself needs some cash and knocks out 100 of these customs while they sleep, signs them, and now they are originals, sort of.

So… verifiable originals are going to be the hot thing. A painting with a video of the artist painting it… but not an AI generated video of course!

Maybe the artist will have to print it on location while you watch.


> Maybe the artist will have to print it on location while you watch.

For the music industry, this has already happened. Artists make a lot more money from live performances than from album sales or streaming fees.


That’s a good analogy. And it’s right.


You must never have seen the original Blade Runner if you find this line of thinking original...

"you think I'd be working in a place like this if I could afford a real snake?!?"


To me that implies that real animals are scarce; the price is not related to any kind of "artistic realness."

If they said that hand-crafted robotic snakes were more expensive than run-of-the-mill live-bred snakes, that would support your point about artistic realness.


> Well, thanks to AI being able to learn and determine where brush strokes would land, we take advancements from 3D printers and your desktop painting rig picks up a brush and paints it just as the artist would have.

Good luck convincing enough people in the art community to give you data on their process so you can do this. There's definitely enough out there in things like PSD files but the AI community has been so rude and antagonistic to art communities that most of them have a kneejerk hate reaction on any mention of the technology, and rightfully so. AI users and companies have been gleefully abusing artists from day 1.


Why would you assume it would need to be given? If you examine a painting with enough care, you could work out a few brush strokes. Now hand that to an AI that can process everything and come up with a style and how to paint it.

If you are making a thing, there is evidence of how it was made. Pattern-match enough of it and you're done.


All right, so you're building a robot to do art forgery.


This has been an issue on Steam for ages, people call them "asset flip" games, because someone can buy a few $10 asset packs and piece together a game out of them.

AI generated content is not meaningfully better or worse than these low effort games, though taking the time to generate passable content with AI is probably a lot more effort than just using $50 worth of assets that are already packaged up for unity.


Massive amounts of shovelware on Steam is most definitely not a new phenomenon.


Amazon is more like eBay than eBay now

Steam is like eBay but for low quality games

It happens to them all. There are big games on steam, but 95% of the stuff on there is low value, low cost content


Steam has systems in place to bubble high quality work to the top.


Asset store is still a more effective tool to create shovelware than AI. It's been this way for years. Recommendation systems (digital or otherwise) are already coping with it.


I'd wait for more information before making any assumptions about what Valve is doing here. So often these stories here are lacking context due to only one side trying to paint the situation in a very one-sided light.


Agreed. The link here is to a Reddit post from a relatively unknown person claiming to quote a private email from Valve. Kotaku has a bit more reporting, including a second report from a developer on Reddit, plus some skeptical comments. https://kotaku.com/valve-ai-art-generator-steam-crypto-ban-m...

I'm pretty sure if this is Valve policy they'll have no trouble saying so publicly. I miss the old days of journalism where someone made an effort to get the story correct including responses from the named parties.


To be fair there are still examples of quality journalism, it's just that the internet doesn't care as much for that content, as it doesn't generate the outrage present in this thread. Unfortunately the incentives are aligned with ad revenue instead of accuracy.


Valve made an official statement, which PC Gamer (and others) have reported on. https://www.pcgamer.com/valve-is-scrutinizing-games-with-ai-...

> In its statement to PC Gamer, Valve said that "The introduction of AI can sometimes make it harder to show a developer has sufficient rights in using AI to create assets, including images, text, and music. In particular, there is some legal uncertainty relating to data used to train AI models. It is the developer's responsibility to make sure they have the appropriate rights to ship their game.

> We know it is a constantly evolving tech, and our goal is not to discourage the use of it on Steam; instead, we're working through how to integrate it into our already-existing review policies. Stated plainly, our review process is a reflection of current copyright law and policies, not an added layer of our opinion. As these laws and policies evolve over time, so will our process."


Kotaku have proven themselves to be a garbage tabloid in their reporting of NFTs. I would never bother to read anything they had to post.


Valve is quite clear about their reasoning. Since AI models use all kinds of sources for their training, they don't want those assets on their platform because they are afraid of copyright claims.


For all we know, the game in question had images clearly aping some licensed characters. We don't know how stringent the policies are without clarification or examples of art found infringing. How did Valve know that the art was AI-generated? Did the developer tell them or include it in their marketing materials? It's basically just reading tea leaves without that information.


It actually sounds like if you claim to have ownership of the training data you can still use AI generated assets. For most people this is a distinction without a difference however.


That seems very sensible to me, I hope other platforms follow this example


This is just legal cover until such time as it's possible to enforce no child exploitation imagery, no copyright stuff, etc.

It doesn't matter if they are able to enforce it, Valve can use this policy as cover if they ever get sued.

Don't overthink the motivation. They will not even have a bulletproof way to detect AI imagery, as it evolves every single day as an arms race, and detection is a full-time job. Even a FAANG or a state actor would need to dedicate team(s) to detection technology and still have false negatives.

The same sorts of things already happen for example on YouTube and Twitch, where types of content are against TOS or copyright but enforcement is sporadic and selective, smaller operations often fly under the radar of enforcement, bigger creators who are netting the org sufficient revenue will likely be able to get away with more, etc, the automated tools for detection are flawed.


> They will not even have a bulletproof way to detect AI imagery, as it evolves every single day as an arms race, and detection is a full-time job. Even a FAANG or a state actor would need to dedicate team(s) to detection technology and still have false negatives.

Are people actually trying to detect AI-generated content? That would not only be pointless and futile; the threat of false positives would be enormously detrimental to anyone creating legitimate work.

It is such a ridiculously bad idea I'm dumbfounded that anyone smart would be trying to do it.


Yep. There is a cohort of #noaiart amateur-but-wants-to-be-professional artists on twitter who believe their mediocre talents would be paying their expenses if it only wasn't for that pesky ai imggen. (Ignoring that a vanishingly small proportion of imggen art is replacing commissioned art - most is art that would simply never have been made). Like a horde of locusts, they will randomly pile onto AI artwork with hate comments for a brief period of time before moving onto the next one.

People are 'offering their services' where you can DM them a link to an image and they'll eyeball it and tell you if it's made by AI. Laughable hubris, if it wasn't for the inevitable ramifications of false positives.


> Like a horde of locusts

My sympathies will be with the people whose career aspirations were just demoted to hobby, not the person who clicked "regenerate" 5 times then decided to upload.


no there isnt


I have first hand experience with them.


To be fair to the so called unwashed masses, you seem pretty annoying yourself.


Were you trying to reply to someone else?


Nope.


Ah, looked at your profile, I see you're one of the locusts.


Sorry that I don't want all our cultural artifacts to be produced by an algorithm I guess.


You need to think about different contexts.

Think about security or trust and safety or anti-scam or anti-fraud.

AI generated image, video, and audio can be used to circumvent a lot of systems used in these domains. Many of these domains are for protecting users from being scammed, being impersonated, being tracked, etc.

Think about criminal court. Evidence may become impermissible if it can't be proven whether an image or video or audio document is a forgery or captured reality.

It's a bit flippant and absurd to insult the intelligence of people working on AI detection. I'd be a bit dumbfounded by someone dismissing an effort w/o spending time to think about why that effort may exist.


> Think about security or trust and safety or anti-scam or anti-fraud.

It doesn't matter what context I think about it in. It isn't going to work! And it will make things worse for everyone involved.

Hypothetically let's say we get to a point where everyone believes the detection is 100% accurate. Well that's all that means: everyone believes it. Meanwhile AI has just gotten better, and we're all more fooled than we were before. All we are really accomplishing is enhancing the training necessary for AI to elude detection.

And there will be an inherent bias toward false positives, because high detection rate will be the selling point. The truth is secondary, and there's no way to verify the results.


It does work. If you absolutely need to know if an image is AI generated, you can just have a central authority in the system watch the person draw the picture on a piece of paper. Or drive to your house and hand you the paper and pencil and watch you draw it in person.

There are workflows or system designs that absolutely can and will solve for human-verified creation, they just might be incredibly costly or unscalable compared to existing solutions. It's all just tradeoffs. Might make existing business models no longer work. Might open new ones.


> If you absolutely need to know if an image is AI generated, you can just have a central authority in the system watch the person draw the picture on a piece of paper. Or drive to your house and hand you the paper and pencil and watch you draw it in person. There are workflows or system designs that absolutely can and will solve for human-verified creation, they just might be incredibly costly or unscalable compared to existing solutions. It's all just tradeoffs. Might make existing business models no longer work. Might open new ones.

This is pretty much my point. Like you said, incredibly costly and unscalable. A non-solution! We're better off not pretending we can compute what is and isn't real.


I don't agree, I think it's just a matter of the right mix of distributed trust and creating incentives for honesty and penalties for dishonesty. As well as those costlier mechanisms for verification to be available for a subset of cases.


Yes, people are working on it. The thing you're missing is that many of the contexts where the money is actually being spent are not really relevant to the public discussion around AI generated content. It's more about making sure that nobody gets a bank loan using an AI generated voice and face, or that people don't get scammed by a deep fake of their relatives, or that your government office isn't being slammed with subtle propaganda, for instance. The answer to your concern is to adjust your expectations of accuracy. When an ML model flags something as fraudulent, these systems don't treat that as proof it actually is fraudulent.


Yes, multiple teams are working on it, private and public.

>It is such a ridiculously bad idea I'm dumbfounded that anyone smart would be trying to do it.

Agree with you there.


I know companies that are trying to do this, yes. The detection tools are terrible, but places really do not want writers submitting AI-written content, for example.


> but places really do not want writers submitting AI-written content, for example

And I want the power of flight, but it isn't going to happen!


if there's money to be made, then there'll be people who'll try and make it. doesn't matter how aware of the philosophy or ethics you are. humanist intelligence rarely comes into the equation when money is on the table


> This is just legal cover until such time as its possible to enforce no child exploitation imagery, no copyright stuff, etc.

If the art used in a game violates copyright or contains imagery of exploited children, ban it of course, but what does that have to do with whether it was generated via AI or created in another manner?

If anything AI generated art should be _less_ susceptible to copyrighted stuff because everything is original (even if it's not in original style)


Because AI IP law is murky, and that's all Valve cares about.


Publicly acknowledging a problem obliges you to follow up with an enforcement response for your position to be credible.

Imagine you are a trademark holder and someone is using your IP but you don't enforce your trademark by litigating. Your claim is weakened.

It shows the public and the court how significant this problem is for your party.

Edit: copyright -> trademark


You seem to be confusing copyright and trademark. Copyright isn't diminished by non-enforcement. A trademark risks being invalidated or genericized when not enforced.

Intellectual property is an encompassing term that seems to lead to this sort of confusion.


Thank you!!


That seems pretty sensible. There have been lawsuits from artists before. Do you want to risk a game selling 10m copies and then it turns out that all the art was just copied and pasted by the "AI" and Valve is now on the hook?

Also from a store perspective, any game where shortcuts like this are used tend to be shit games. They don't want spam games to be pumped. There's already enough indie trash platformers that nobody wants.


There are already successful Steam games known to have used AI art. And this is only in the cases of the developers publicly admitting that.

High On Life used Midjourney for its in-game posters - https://store.steampowered.com/agecheck/app/1583230/ - https://www.thegamer.com/high-on-life-ai-generated-art/

And Firmament https://store.steampowered.com/app/754890/Firmament/ - https://www.pcgamer.com/firmament-ai-generated-content/

So is Valve now going to remove these games off the store? This seems like a very terrible way to handle this - they need to make clear rules and make a public statement, not just start banning apps that they sense use AI art.


Can you explain to me how "you must affirmatively state that you own or have licensed rights to the training data (and if you're lying, the legal responsibility is yours and not Valve's)" is not a clear rule?

And yeah, they should kick those games off for using copyrighted materials that they do not own.


This is a rule developers are just finding out now from a game getting rejected. Pretty major deal if multi-million dollar budget games like High On Life should now be banned (even worse if they don't ban it now, making the rules unclear). It should have been a public statement, with a clear change to their developer terms.


No, the developers just need to “affirmatively confirm” that they own the copyright on all the works in Midjourney’s training set, and they are good.


I'm sure some indie studios will sign whatever, but as soon as a large studio uses a public model, Steam will have to roll over on this one.


> Also from a store perspective, any game where shortcuts like this are used tend to be shit games. They don't want spam games to be pumped. There's already enough indie trash platformers that nobody wants.

I find this hard to agree with. A game engine is a "shortcut" too, I can imagine people saying at some point anything developed with Unity would "tend to be shit games".

Associating quality with visual fidelity anyway is wrong, look at Terraria, I'm pretty sure anyone semi competent with AI generation could produce better assets, but it wouldn't help them produce a better game.

People will use gen AI art in good games, and people will use gen AI art in terrible games.


> Also from a store perspective, any game where shortcuts like this are used tend to be shit games.

Yeah, they indicate that they have already submitted multiple games with AI generated assets, and submitted this one "with a few assets that were fairly obviously AI generated." Maybe I'm being unfair and they are making really good games, but these are not good indicators to me.


These tools have only now become available. I can imagine a game where these tools are used to develop a large world and backstory previously not possible. The main images and text may be hand crafted, but you might walk down a street where the other buildings are all unique, have names, and descriptions. It could really flesh out some of the procedurally generated games out there. Or it could be terrible. Or it could be good for being so terrible. It shouldn't be rejected entirely just yet.


It's not being "rejected entirely". That is mendacious editorializing. Generative AI products are being rejected unless you affirm that you have rights to the entirety of the training data set.

What's wrong with that?


That you probably don't need rights to the training set in the US, unless Congress changes copyright law. This is being litigated.[1]

Humans can look at a collection of copyrighted images and draw a new picture. The legal basis for holding AIs to a higher standard is weak.

[1] https://www.theverge.com/2023/1/16/23557098/generative-ai-ar...


> That you probably don't need rights to the training set in the US

Even if that is true--and it's not sure to be--Valve is within their rights to demand additional coverage.

> Humans can look at a collection of copyrighted images and draw a new picture. The legal basis for holding AIs to a higher standard is weak.

Horseshit and worse words. Computers aren't people. They create derivative works from pushing inputs through mathematical models. The inputs are unerasable and the claims to the otherwise by the AI hustler class exist only to be able to profit off human effort without paying for it.


> Valve is within their rights to demand additional coverage.

As a near-monopoly gatekeeper, Valve is vulnerable to antitrust charges.

FTC is apparently about to go after Amazon.[1] US antitrust policy did far too little for too many years. EU competition policy is more aggressive.[2] The EU competition authorities already fined Valve for selling geo-blocked content that only worked in some EU countries. That's a violation of the basic Single Market rules of the EU.

[1] https://arstechnica.com/tech-policy/2023/06/ftc-prepares-the...

[2] https://competition-policy.ec.europa.eu/sectors/ict/cases_en


> That you probably don't need rights [...]

And if Valve doesn't want to take this risk? To reiterate eropple, what's wrong with that?


AI doesn’t copy and paste art though, it generates new art based off of patterns it’s seen in its training material. If its training material heavily features red squares, and you prompt it to generate a new piece, chances are you’ll get something with a red square - not because it copied that red square from any particular piece, but because it was a common element. There’s a difference between reproducing common elements in pursuit of adhering to a style, and copying and pasting.


https://arxiv.org/pdf/2301.13188.pdf

Stable Diffusion spits out slightly blurrier versions of the pictures in its training set.


But that’s not the default behaviour of these models at all. Instead you have to work pretty hard to extract these original images.

When using the models normally, they do generate new content, even if the style and subjects are of course interpolated from training data.


If you deliberately try to make it do so. Even then it usually doesn’t work.


It can do that for some images, and that's a training mistake.

There's no way it can do that for more than a tiny fraction.

"we bias our search towards duplicated training examples because these are orders of magnitude more likely to be memorized than non-duplicated example"


Title is very misleading for something whose only evidence is an anecdote from Reddit. Was expecting a statement from Valve based on the title.


Interesting. So products that use AI generation as part of an API, say using a diffusion model to generate different stylings for the walls and textures in a level creator, would fall under this?

Guess it’s time to ask for forgiveness rather than ask for permission and not let Valve know where my art assets are coming from in my web-based API.

If I were making a game I’d just lie and lie at this point.


AI code generation is OK though! Because they can't detect it?

It's cool to see the development of new ethical standards in response to new technology. If I could get an option for ethically-sourced AI, which only uses public-domain art / text / code for training, that'd be nice.


For a very long time music producers would pirate their samples, plugins and presets. The idea was that nobody could tell how illegal these tools were in the finished product, so there was no reason not to steal. It was genuinely the gold standard for a while, and even established artists like Diplo, Porter Robinson and Kanye West were caught pirating content en-masse.

Nowadays there isn't the same attitude so much. Many people still pirate sounds, but skeptical listeners will sometimes ask musicians to show off their project files to embarrass them over how many pirated Cymatics drums they use and their version of Sylenth licensed to "RuTorrent".

It wouldn't surprise me if the same thing happened today. AI-assisted development will take off for a while, and then people will ask self conscious questions like "nice art, who's your art director?"


> Many people still pirate sounds, but skeptical listeners will sometimes ask musicians to show off their project files

And the musicians comply? Weird.


I mean, not always. It's hard to be super secretive in a live situation, but I'd wager many musicians have successfully hidden their pirated plugins.

A lot of people have been caught anyways. Steve Aoki accidentally left a visibly pirated Sylenth VST in a promo vid, Porter Robinson and Skrillex both got caught with pirated plugins during track breakdowns, Kanye West posted a video with 30 tabs of The Pirate Bay open to download Logic Pro... the list goes on. It was extremely common in the early days of digital music production (and still is today, to an extent), but the backlash has pushed most legit production houses to legit licensed software.


I guess we need clarity on whether using copyrighted material for training is covered under fair use.

GenAI clearly meets the "transformative" standard.

OTOH it seems likely that it will have difficulty with the "amount and substantiality" factor, since it ingests whole works, though this is not necessarily a hard barrier given the "transformative" nature.

My guess is that the "Effect upon work's value" standard vs. the "transformative" standard will be the area where there is most action. Clearly, in aggregate, GenAI will have great impact upon works value. However, this is not the usual standard (it is about individual works), and I would argue that this would be creating new law by the courts.

Hopefully we will get a case to the supreme court to resolve this, quickly. I think that this is a boon for humanity and I would like to see the cuffs taken off as quickly as possible.


> GenAI clearly meets the "transformative" standard.

IANAL, but the problem is that fair use is an affirmative defense and is decided for each case separately. One GenAI may be transformative, while others may not, depending on how much of the original training data they throw back at you.


Of note: they aren't banning AI generated graphics. Rather:

> we are declining to distribute your game since it’s unclear if the underlying AI tech used to create the assets has sufficient rights to the training data.

It's not AI-generated graphics per se. Instead, it's AI-generated graphics where the rights to the training data cannot be established. I think that's an important distinction.


This will change within 6 months I promise you, EA/Ubisoft/etc will ALL be shipping AI generated textures in games before the end of the year.


That's not a change from this though, as long as those AI generated textures are from systems that were trained on images that they have permission to use (or are copyright free).


No such system exists so they won’t.

Even Adobes system has questionable training data mixed in


Every major game publisher has an enormous archive of in-house artwork, and probably a decent amount of hardware they could use for training. They'd be stuck with open source software like Stable Diffusion - or have to cut a deal with Midjourney or whoever - but there's no barrier to creating such a system.


Neither Stable Diffusion nor Midjourney can be used to create artwork under these Valve guidelines.


And only a matter of time until a major game lets you talk to NPCs by generating dialog with an LLM.


It'll be interesting to see if a game can make good use of that, but it sounds like it would lead to some very frustrating interactions.


The first versions of this won't be using it to create characters with plot significance and great depth, they'll be generating infinitely large piles of background character chatter. No matter how big your budget is, you'll never be able to make a world full of random civilians who don't sound repetitive if every line has to be recorded in advance.

Any prerecorded line having a slight bit of personality to it ("I used to be an adventurer like you, but then I took an arrow to the knee") becomes very noticeable after the first time you've heard it, so the safe thing to do is have lots of the most inane chatter possible ("Nice weather today!") to make it stand out less.

If the game can write new lines and synthesize voices on the fly, you could have more interesting lines without the repetition.


Have you tried it? As in, prompt the LLM that it's the character you want and then converse?


I've used GPT for DnD type stuff. It's really good if you don't have an existing expectation of the world you're in, as everything should be a hallucination. Within the context window LLMs are pretty internally consistent, so within a chat session it'll be surprisingly coherent and human-like. But inside of a full game world like Skyrim, each hallucination could lead to wasted time finding the magic scroll the LLM told you about but never existed. If you can think of a style of game where hallucinations aren't a problem, or a way to put the LLM on rails so that it won't hallucinate too much then I think players will have a good time.
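One rough way to "put the LLM on rails" is to ground the prompt in a whitelist of facts that actually exist in the game world, then post-filter replies that name anything outside it. A minimal sketch (all names here are hypothetical, and the actual LLM call is stubbed out; this only builds the grounded prompt and checks replies):

```python
# Hypothetical sketch: keep an NPC's dialogue grounded in a whitelist of
# world facts, so the model can't send the player hunting for a magic
# scroll that doesn't exist. The LLM call itself is out of scope.

KNOWN_ITEMS = {"iron key", "healing potion"}

def build_npc_prompt(npc_name: str, player_line: str) -> str:
    """Build a prompt that restricts the NPC to items known to the game."""
    facts = "; ".join(sorted(KNOWN_ITEMS))
    return (
        f"You are {npc_name}, a shopkeeper in a fantasy town.\n"
        f"You may only refer to these items: {facts}.\n"
        "If asked about anything else, say you don't know.\n"
        f"Player says: {player_line}"
    )

def mentions_unknown_item(reply: str, candidate_items: set[str]) -> bool:
    """Post-filter: flag replies that name an item outside the whitelist."""
    reply_lower = reply.lower()
    return any(item in reply_lower for item in candidate_items - KNOWN_ITEMS)

prompt = build_npc_prompt("Mara", "Do you sell anything useful?")
```

The post-filter could trigger a regeneration instead of showing the reply, which trades latency for consistency with the game world.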


Of course it will change and Valve will even release their own game that's generated with AI.

Governments and companies everywhere trying to lock out small time people today before they get too much traction with AI generated content. They know indie devs will never be able to prove their model is only trained on their art. Only massive companies with billions of dollars can do that right now.

Every big company is trying to create rules to ban AI but keep a small enough loophole that they can use it when the time is right.


> At this time, we are declining to distribute your game since it’s unclear if the underlying AI tech used to create the assets has sufficient rights to the training data.

This seems like a completely fair response from Valve. On top of that, they gave them notice and an opportunity to remove the offending content (with that content explicitly called out) and offered to refund if that was not a viable option.

If this was an iOS/Android app, they would have just been told to pound sand and swallow the dev fee. Good on Valve for not lapsing communication here.


It sounds like they even looked into the specific AI the gamedev said they used:

> we reviewed [Game Name Here] and took our time to better understand the AI tech used to create it.

And offered a refund on the $100 app submission credit:

> App credits are usually non-refundable, but we’d like to make an exception here and offer you a refund. Please confirm and we’ll proceed.

Seems incredibly reasonable.


Organic only content -- here we come.

Not that it bothers me, but I feel oddly validated that this appears to be the path taken. It makes sense, even from a purely "we can't review it all" perspective.


I think it's fine.

You have to have rights to do AI things with the content of your datasets. No more "download the whole internet" or "create image generation models from the scraped contents of a stock image provider".

I think it's going to turn into a new class of copyright permissions.

Along the lines of

> thou shalt not make a machine in the likeness of a human mind

More like

> License is hereby given for the consumption of these contents by human minds


Well, it's not like they're The App Store and controlling everything you can install. You can still put AI-assisted software on any machine that can install Steam, they just don't want to deal with the legal implications of hosting the dubiously-generated content themselves.


So using Adobe Firefly is fine, since they only trained on data they had the rights to?


Yes,

You can assert you own or have the rights to those images, based on your license with Adobe.


How do you prove that the images were generated using Adobe Firefly instead of SD or MJ?


They don't seem to be asking for proof.

> we are declining to distribute your game since it’s unclear if the underlying AI tech used to create the assets has sufficient rights to the training data

So it seems that asserting "assets X and Y were generated by tool Z that has rights to its training data" would be sufficient. Presumably AI tools will also start to formalize that declaration alongside their terms of service.


Are there similar things where you have to declare something that is in no way provable in game development? It feels kinda silly.


Yes, it's fairly normal for distribution platforms to require you to declare that you have the rights for everything in your game (which is in no way provable). Extending that to everything in the training set(s) for the model(s) used for generative AI is ludicrous, but it just amounts to the same thing, plus the (almost certainly false) assumption that every work produced by AI generation is legally a derivative work of every work in the training set(s) of the AI model(s) used.


>which is in no way provable

I guess in that sense it's not provable by the platform, but it is provable by the actual copyright owner if you're infringing one.

What I mean before is that no human can know whether you used Firefly or SD or MJ or some other custom model.


Sort of - the person uploading to Adobe Stock can upload copyrighted material. Adobe will handle copyright claims and be liable for up to 10k worth of damages.


Only 10K?


How would you be able to know if something is AI generated if it's not outright stated in the product description?

"Yes, I intentionally designed the static image of this man to have 5 and a half fingers on one hand with a distorted logo on their t-shirt, please allow this game, Valve."

How can you prove that something is AI generated? Would creating graphics with Adobe's Photoshop AI fill tool count as AI-generated content to Valve, or is Adobe's AI dataset built from copyright-free graphics?

I wonder if this is Valve also trying to somewhat cater to/attract artists on the platform, as I'm sure artists are against AI on the grounds that it'd "steal their jobs/hamper creativity".


I think the idea is that if someone gets sued for the AI art in their game, Steam plans to point to their terms of service and say the legalese equivalent of "they promised us that it didn't have AI art, if they didn't lie to us we wouldn't have hosted their game", and not also get sued.


This is obviously nonsense because games and artists have been using AI and procedurally generated content for decades, everything from textures, models, maps, animation, music and sound. Generative models are now even integrated into the NVIDIA drivers for upscaling, and every photo you take with a recent Samsung phone uses generative AI.

Just because now generative AI has made a significant leap doesn't mean its anything new. And copyright is irrelevant because models are clearly derivative works the same way artists remix existing works of art, if that were to change, copyright law would destroy the majority of all creative endeavors.


> games and artists have been using AI and procedural generated content for decades

These are not the same things. Procedural generation is not the same as feeding different prompts to a model until it vomits up something that looks sort-of like what you want.


https://i.imgur.com/JWjLFlO.png

now that's how you know when a comments section is gonna be amazing


I do not believe this will be limited to Valve. I expect more companies to start covering their backsides by implementing similar rules to avoid copyright lawsuits. I can't say I would blame them either. I am not a lawyer, but I think one of the risks is that LLMs do not show their work, so proving where something came from is likely to end up in court after a lot of expensive discovery is performed.


Engineers have created it and lawyers have ruined it. It's interesting how whole professions can be inherently constructive or inherently destructive.


Engineers, well companies, created an entire industry without any regards for how that might affect artists... again.

Musicians are still being screwed over because engineers wanted to change how music is distributed. The goals are noble enough, just as with the music, but large corporations inserted themselves in the middle to capitalize on the work of the artists. I can't fault artists teaming up with lawyers again in an attempt to be paid for their work. It didn't really work out for the music industry, but hey, what can they do?

As engineers, we clearly aren't on the side of the artists; we help the companies in the middle, not the artists. When developers created ChatGPT, or Stable Diffusion, did any of them insist on building in license tracking, to ensure that only work in the public domain or under appropriate licenses was used, or at least tracked?

We're once again trying to build a new industry, but we don't care how that might affect others. It's dumb, it's not like there wasn't enough publicly available material, it's just that it's cheaper to ignore licensing.


Hang on. Imagine that these image generators were based entirely on public imagery, but work equally well.

Clearly that would clarify the legal situation, and it’d be a lot harder to ban it. But would that help artists? If so, how?


Musicians were always screwed over by the music industry, not engineers.


Or are engineers trying to destroy the livelihood of millions of artists, and lawyers are protecting it?


Smash the looms! Smash the looms!


Looms weren't trained unwillingly by the people they replaced. You're thinking about outsourcing.


But they were. Weavers improved on their processes for a long time before the engineers swooped in and put that accumulated knowledge into an automated form.


And they were worried it would be used to disenfranchise them and enrich the factory owners at the expense of themselves/laborers, and surprise! they weren't wrong.


Actual title is:

"Valve is not willing to publish games with AI generated content anymore"


Does that mean High on Life is now banned? If I recall correctly, they used AI on purpose to give ads and billboards a nonsense alien feel.


Did you read the article?


Atomic Heart, High on Life, Hawken Reborn, Observation Duty, and probably many others, but no, they are not banned.


Hypocrisy is alive and well


Or maybe the developers of those games have given their assurance to Valve that they own all the relevant copyright, and therefore the same standard is being applied?

> As the legal ownership of such AI-generated art is unclear, we cannot ship your game while it contains these AI-generated assets, unless you can affirmatively confirm that you own the rights to all of the IP used in the data set that trained the AI to create the assets in your game.


If they claimed that then they lied because we all know those games studios didn't train their own Image gen models without relying on the base SD models


Maybe they did, maybe they didn't. But if the developer is willing to sign a document saying they own the rights, that's probably good enough for Valve to feel their ass has been covered.


Wonder if using (Japanese) anime based AI assets would be workable instead, as the licensing situation there sounds a bit clearer?

aka "copyright doesn't apply":

https://cacm.acm.org/news/273479-japan-goes-all-in-copyright...


Random reddit anecdote.

And from 23 days ago.

AND misleading clickbait title.


Seems shortsighted and overly limiting to me. Perhaps in this specific case it makes sense?

What's the difference?

A) Human creates artwork in the style of [insert artist here]

B) Computer creates artwork in the style of [insert artist here]

Both "trained" against existing copyrighted works except one is human. Is this to "save jobs"?


The models which can prove the provenance, and valid licensing, of their source assets will become increasingly valuable with time.

This gives social networks an edge, which often have EULAs that allow the business to use uploaded content _at least_ internally, if not commercially.

_And_, in the short term, there's an opportunity for someone to pay armies of artists to create _decent renditions_ of existing styles and known works. It's not a copyright violation if a human being mimics another human being in creating a new, original work.


After seeing that Unity Muse[0] AI presentation yesterday and the following backlash regarding the source material[1], this seems to be a huge legal minefield to be solved first.

[0]: https://www.youtube.com/watch?v=dR4IuN2tF78

[1]: https://nitter.net/unitygames/status/1673650585860489217


Without seeing the art there is nothing to judge. I bet it's a visual story and they are using characters from established IPs.


100%, my guess is this is someone putting Disney or Nintendo characters in compromising situations, and Valve would have rejected it, AI or not


>I improved those pieces by hand, so there were no longer any obvious signs of AI

Steam's objection is to other parties' copyrighted work being in the AI training dataset, even indirectly, and they asked for it to be removed, not for the issue to be concealed better.

Tricky copyright questions aside, inability to follow basic instructions is definitely a disadvantage when going through approval processes


What sounds very weird to me, is that I doubt Valve is verifying the copyright for all the graphics and text you submit. Why would they reject something because "it looks AI generated"? The potential for legal hazard is probably less on these re-mashed works than on purely copy-pasted content.


A moronic policy if true:

1. Copyright Office made it clear that AI generated output is generally not copyrightable irrespective of training data: https://www.federalregister.gov/documents/2023/03/16/2023-05...

2. In fact, there is credible legal theory that goes as far as to conclude that a training dataset's license cannot affect the resulting weights under US law (the EU's take on copyright makes it less clear)

3. The DMCA already provides Valve with legal cover in the unlikely event that a training dataset's license is somehow found to affect the IP rights of generated content.

4. By adopting this policy, they are acting more as a traditional publisher than a platform, thus exposing themselves to more liability, not less.

This policy makes no sense to me no matter how I look at it.

The harsh reality that Valve and everyone else needs to accept is that AI-generated content from "unclear" datasets is here to stay. People need to accept this fact^1 and move on. I already have.

1: Copyright is an incredibly limited, obsolete, broken invention that was never meant to handle concepts like this. It's very much like a poor analogy that we stubbornly insist on applying to situations where it simply does not fit. We will continue to arrive at bizarre and nonsensical conclusions like this as long as we continue holding onto this broken invention. Reward authors in another way; placing arbitrary limitations on information was never the right way to do it.


Good luck enforcing this. You can generate textures all day with no evidence that they were AI generated.


You can also ban games from your platform for even the vaguest suspicion that they contain AI generated assets, or probably even for complaining about the policy.


Depends if the game publisher is willing to run the risk of their game getting yanked from sale


Where do they draw the line? What about DLSS? Doesn't any game using that have "AI generated graphics"? I guess their email wording focused specifically on assets. Does that mean if you don't pre-bake assets in as artifacts, you're fine?


With DLSS Steam isn't hosting and distributing AI generated content, Nvidia is distributing it with their drivers. So any liability regarding the source of the training datasets falls on Nvidia, and there is no reason for Steam to get involved.


That seems reasonable at first glance, but how is that different from the game downloading generated content or models used for inference from its own servers after Steam distribution? If you argue that the code being used to access it is distributed, then what about the code to integrate with DLSS APIs on Steam distributed games?


There is very clearly a line most people will understand between what DLSS does, and generating completely new art that mimic existing intellectual property.

No one is ever going to accuse DLSS of creating new artworks containing some other legal entity's existing IP, for example; it's literally just a (very clever) upscaling of the original art. If it did, it would presumably render the game being upscaled almost unplayable, as it would be changing the output to a state unrecognizable from the input frame.


There may be a line between there somewhere, but it's not at all clear where it is. What about generating foliage or varied ground textures? What about generating buildings? Or NPCs? Also, the "just upscaling" relies on training data from outside your own game. Why is that okay when the rest of this isn't?

I totally get why Valve is taking the stance that they are. I imagine its hard even for them to know where to draw the line (evidenced by how long the turnaround time was on the support response).


> What about generating foliage or varied ground textures? What about generating buildings? Or NPCs?

This isn't how NVidia DLSS or AMD FSR works at all. DLSS/FSR can't create new buildings or foliage, or NPCs, so hard to foresee problems of the kind Valve are concerned with. Same for varied ground textures- the entire point of the technology is to sharpen and upscale the original image, or in case of DLSS3 inject new matching frames.

The only "risk" to DLSS is in Nvidia's own training data, but there is no "risk" of another company's existing IP leaking into the final frame. Again, if there were, gamers wouldn't want to use it, as it would be destroying the original frame! If the resulting frame isn't a near-perfect match for the original, DLSS has failed. Thankfully it does nearly perfectly match the original in use, and alongside AMD's FSR 2.0 it has been one of the best advancements in gaming technology of recent years: effectively a significant FPS boost "for free" on the same hardware.

> https://www.nvidia.com/en-us/geforce/technologies/dlss/

> https://www.amd.com/en/technologies/fidelityfx-super-resolut...

While the line may be gray for other AI technologies in gaming, such as using it to create new original textures or models, DLSS/FSR is just a really good upscaler: no "new" content is being created, and therefore there is no risk of IP infringement. To be really blunt: FSR and DLSS have been in almost every new game for the last couple of years on both PC and console, across literally hundreds of games now. If there were IP infringement issues, we would know by now; we are already onto the second/third generations of these upscalers.
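To make the "no new content" point concrete, here's a toy nearest-neighbor upscaler. This is nothing like DLSS internally (DLSS uses a trained neural network plus motion vectors), but it illustrates the property being argued: every output pixel is derived from the input frame's own pixels, so no outside IP can appear in the result.

```python
# Toy nearest-neighbor upscaler -- a deliberately naive stand-in for
# DLSS/FSR, just to illustrate that an upscaler's output is built
# entirely from the input frame's own pixels.
def upscale_nearest(frame, factor):
    """frame: 2D list of pixel values; returns a frame `factor`x larger."""
    out = []
    for row in frame:
        # Repeat each pixel horizontally...
        big_row = [px for px in row for _ in range(factor)]
        # ...and each row vertically.
        out.extend([list(big_row) for _ in range(factor)])
    return out

frame = [[10, 20],
         [30, 40]]
big = upscale_nearest(frame, 2)
# Every output pixel is one of the input pixels -- no new content.
assert {px for row in big for px in row} == {10, 20, 30, 40}
```

A learned upscaler hallucinates plausible detail rather than copying pixels verbatim, which is where the "risk" in Nvidia's own training data comes from, but the output is still constrained to reconstruct the input frame.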


Apologies if I was unclear. I didn't mean to suggest that DLSS generates textures. I was raising the question about whether or not generated textures would be in violation of the policy. And if they're fine, what about generated buildings or NPCs? It's fine that DLSS is considered compliant. My point is that it's not at all clear where we draw the line.


DLSS is trained on the games themselves no?


DLSS requires no training on your own game to use - you just use the pre-trained system NVidia provides, IIRC. So it has been trained on game data, but not necessarily yours.


It used to be trained on specific games, but I think most of the time it isn't nowadays outside of a few major titles


There isn't one trained model per game I think, to update DLSS manually you just replace a .dll file.


I wonder if they'll do the same for AI-generated text? Why shouldn't they, honestly? I could easily fine-tune an LLM on the writings of a certain author, or maybe the content of Mass Effect 1-3, and have the outputs be similar.
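A toy sketch of the concern, with a bigram Markov chain standing in for the fine-tuned LLM (the corpus here is made up). A real LLM is far more capable, but the point is the same in kind: the "new" text is assembled from, and imitates, the training corpus, which is exactly why the corpus's copyright status matters.

```python
import random
from collections import defaultdict

# "Train" a bigram model on one author's text: for each word, record
# which words followed it in the corpus.
def train_bigrams(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)
    return model

# Generate text by repeatedly sampling a recorded successor word.
def generate(model, start, length, seed=0):
    rng = random.Random(seed)
    out = [start]
    while len(out) < length and model.get(out[-1]):
        out.append(rng.choice(model[out[-1]]))
    return " ".join(out)

corpus = "the citadel burned and the fleet fled and the reapers came"
model = train_bigrams(corpus)
sample = generate(model, "the", 6)
# Every word of the generated text comes straight from the corpus.
assert set(sample.split()) <= set(corpus.split())
```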


Interesting... what would an AI trained only on the Commons/public domain be like? Would it be a clean source for new images? And would new images need to inherit a public/Commons license (GPL style)?


> And would new images need to inherit a public/Commons

Well first we need to know if using images for AI training can be considered fair use.


I thought Adobe Firefly did that.


And some non-public-domain images of their own.

> The current Firefly generative AI model is trained on a dataset of Adobe Stock, along with openly licensed work and public domain content where copyright has expired.


Not only does this keep them safe from copyright fallout, I think its real goal is to hold back a flood of shit games until the tech matures.


This seems utterly impossible to enforce. You really going to guarantee that your design firm didn't use AI to generate the assets?


It's a CYA thing.

Steam says "we don't allow AI content".

Someone shoves AI content on the platform anyways.

If it can be proved they violated the TOS, they then have the ability to nuke their game and stop someone from suing them. If they can't prove it, well they can't prove it and the game stays.

To do otherwise opens up the door to Steam having to "vet" all the AI content. So yes, AI content will slip through (in massive droves), but it will be in the indie scene.

The biggest impact here is going to be on AAA devs, who can't just neglect to mention they used AI at some point. This is actually the first thing that could "kill" Steam, or give Epic a competitive advantage. There's zero doubt that companies like EA/Activision/whatever want to jump all over AI to make yearly releases like FIFA even cheaper, and if Epic is willing to say "come on over," we might see Epic exclusivity for that reason alone, rather than the current "here's a pile of money to make up for all the sales you'll miss out on when no one remembers your game released"


If Steam (or probably more likely Steam's lawyers) is/are worried about liability, shouldn't Epic be also? The lawsuits are firing up, I wouldn't want to be the whipping boy chosen by the copyright holders to punish.


I would assume so, but I'm faaaar from an expert.

My amateur opinion is there's too much money to be made for this to be stopped forever (we SHOULD rework copyright but we'll probably just make some dumb rule when disney decides how they want to handle it), so epic might just say "the courts will side with us eventually and the benefits are waaay too high to ignore"


I don't think it's really about enforcement. It's more about liability.

Valve has an official position that they don't allow AI content (apparently). When the lawsuits show up, they can say that they don't serve any AI content as official policy. When someone points out the AI content that they do serve, they pull out their expert witness, who testifies that their AI detection method is as good as possible and they couldn't have been expected to do any better. Meanwhile, they're more than happy to remove anything explicitly flagged that falls through the cracks.

Finally, I suspect that anyone who can prove that they're able and willing to indemnify Valve against lawsuits for AI content that their game contains will be allowed to have AI content.


> Finally, I suspect that anyone who can prove that they're able and willing to indemnify Valve against lawsuits for AI content that their game contains will be allowed to have AI content.

Yes, they're quoted as saying that AI-generated assets are permitted if the developer can "affirmatively confirm" they own all the IP in the training set. Seems reasonable to me.


Yeah, I saw that quoted part, although I suspect that if I show up with a bunch of AI assets that I can prove are 100% mine, then the reviewer is likely to err on the side of Valve not being sued.

Meanwhile, a AAA blockbuster studio will almost definitely be given a pass with a wink and a handshake after saying, "Hey, if anyone figures it out, we'll take the blame," even for assets that throw up multiple red flags.


It doesn't need to be 100%. It just needs to be a reasonable attempt. We all too often let perfect be the enemy of good. Impossible to enforce? That's not true. They did so right here.

Yes, people will work around it, and some will slip through the cracks. That doesn't mean it's a useless policy with no impact.


So you’re in breach of contract and if it turns out later that you don’t own the copyright for what you’re distributing, valve get to plant that firmly on you.


Copyright issues aside, a platform has to consider 'spam'; AI-generated content could quickly and easily overwhelm a platform.


See Clarkesworld, the sci-fi magazine (which had to close submissions after being flooded with AI-generated stories).


Strong disapprove. Artificially attempting to slow progress just creates a massive power disparity for those who do not care.


> Artificially attempting to slow progress

Why do you think Valve is just trying to slow progress? Don't they win when people on their store win?

It seems more likely to me that this is CYA against the major lawsuits that are happening right now from copyright holders.


Ah yeah that is probably right. Chinese indie industry boom?


If this was politically or ethically motivated, I'd be inclined to agree. But it seems this is just a safeguard against lawsuits, which I can understand at least.


That's the wrong path. Soon, 90%+ of the Internet will be mixed with AI content. Btw, what if I write my game using ChatGPT?


I recently tried to use it to learn the Blender API, without much success. It hallucinates left, right, and center, so I had to go back to reading the docs, trial and error, and reverse engineering. I'd be honestly surprised if you could use it to create an entire game.


As long as you just use it for generating the code and not the assets, Valve doesn’t seem to care.


I imagine this is a stopgap measure until there's more concrete legislation in place for AI-generated IP.


Outlast Trials appears to be a fairly large title that utilized AI art. I sincerely doubt it'll get axed.


good news for their competitors


Not really, or at least, not yet. If Epic allows AI generated content, it will just attract devs that use AI generated content. I think those are more likely to be shovelware garbage today than anything else.

Until there's a killer/must-have game built with AI content, I don't think this is going to have much of a noticeable impact.


This doesn't seem to have anything really related to the AI-generation of the graphics--it's 100% about copyright. The statement from Valve even says that explicitly: if this user had owned the copyright to the training data, they would have been fine using the AI generated graphics and text.


We need an "AI generated web game tower defense" FULL FN STOP.


Well what if someone ships a model exclusively trained on legal content?


Then you tell Valve that and they say "OK"?


Would they though?

I think the legality of the dataset would be hard to prove. And I can see some game devs straight up lying, shipping SD 1.5/Llama, and dragging Valve into court when one of them explodes in popularity.


> I think the legality of the dataset would be hard to prove.

Could be. I hope it is. It's a reason not to use it!


This opens up one hell of an opportunity for Epic or a startup.


As I understand it, Epic already has a smaller and more curated catalog by intention. They're already trying to keep out the tens of thousands of low quality and hobbyist titles.


Fake story, guaranteed


Seems entirely reasonable. Just because Stability (who seems to be crashing and burning) decided to try and do a Napster for image generation doesn’t mean everyone else should run into the lawsuit abyss alongside them.


Sounds like a market opportunity.


Beginning of the end for Valve. This is a warning shot. Might want to stop buying games on steam sales.


i really hope the US copies Japan's ruling on this kind of thing.


This just shows me the future is people using AI tools to make their own games custom for them.


misleading title, hn should ban reddit links


How about Firmament?


By that definition, any roguelike should be banned. And, well, we're not seeing that.

I'll watch, but I disbelieve the reddit poster. Probably a CEO bot drumming up obvious bait comments over current computer events.


Basically, Valve is saying no AI-generated content. The premise is that all AI-generated content violates copyright, which isn't necessarily true. To me, it sounds like they're sidestepping a potential issue rather than responding to an actual complaint of infringement from a particular IP's copyright holder.


They are saying no AI content trained on copyrighted work you don't own. That's a very different framing.


Yes, I can see that now, but that seems too broad. Hypothetically, if a model is trained on only a thousand images, of which 10 are copyrighted, does that mean all the images it generates violate copyright law?



