Using Generative AI in Content Production (netflixstudios.com)
177 points by CaRDiaK 23 hours ago | 137 comments




This actually looks pretty good. The key takeaway I got was that they know their business depends on intellectual property rights, and that putting Generative AI into final outputs or productive work, while discounting or dismissing IP law and rights, undermines the foundation of their future success.

That’s likely to be the middle ground going forward for the smarter creative companies, and I’m personally all for it. Sure, use it for a pitch, or a demo, or a test - but once there’s money on the line (copyright in particular), get that shit outta there because we can’t own something we stole from someone else.


Or they can do like Call of Duty, which just makes skins "heavily inspired" by other franchises they don't own. The week Borderlands 4 came out, they put out a few cel-shaded skins that heavily resemble the look of that game's characters; there is one skin, called "vibrant serpent", that is pretty much Reptile from Mortal Kombat; they got a bit of heat in May of this year for releasing a skin that looked too much like one from another game called High On Life; and the list goes on. It reminds me a lot of the disguises Spirit Halloween sells every October.

And yes, I know they do legal, agreed-upon partnerships, like with the Predator franchise or the Beavis and Butt-Head franchise (yes, they exist in CoD now...), but those account for only a tiny number of the premium skins.


The Call of Duty series makes me so sad. I remember when cod 4 came out it felt like a genuinely groundbreaking and innovative thing and I was so pumped to see what IW did next. And then Activision took all of that talent that was genuinely exploring new ground in game development and stuck them in the yearly rerelease of the same damn game mill until everyone got burnt out and left.

For the record, Arc Raiders (just released) makes me feel like I'm back playing MW2 in the golden days. Just in the sense of playing an awesome game and riding the wave of popularity with everyone else.

Thanks, I'd heard whispers but hadn't jumped in yet. I will need to check this out.

(platinum rating on protondb too woohoo)


I've been trying to find time here and there to get the tumbleweeds out of my gaming pc just so I can try that game. Reviews and streams for it remind me a bit of the Dark Zone experience when the first Division game came out.

It is a lot like the Division's DZ. Less toxicity out of the gate, but we'll see how that goes as time passes. They should've taken the "rogue" mechanic from that game.

Arc Raiders is a ton of fun though. Also recommend Helldivers 2 if you just want a PvE shooter. It tends to be buggy as hell but the core game experience is hilariously fun.


Good news is it can run pretty well even on a potato; they've worked really hard on UE5 for the game.

Arc Raiders, and their previous game The Finals, use AI in some capacity for voice acting - though they do still hire voice actors and make it explicit in their contract offer.

>Some of the voice lines were created using generative artificial intelligence tools, using an original sample of voice lines from voice actors hired specifically with an AI-use contractual clause, similar to the studio's production process in The Finals.

https://en.wikipedia.org/wiki/ARC_Raiders

Great game though, I'm really enjoying it too


Unfortunately, games' playerbases don't stick around long enough anymore for grinding that hard to be worth it.

I thought they were on a biyearly swap with Treyarch?

Cod4 was in some ways the beginning of the end for a lot that we took for granted in gaming up to that point. I remember when it released and a couple of us went to my friend's house to play it. Boy were we in for a shock when there was no co-op multiplayer like Halo 3.


If MW didn't have co-op multiplayer on console, then that's another example of the Mandela effect.

It had split-screen local multiplayer, but you couldn't play online in that configuration

Not me. The mix of parkour and multiplayer shooting with beautiful, highly detailed maps is something I like a lot; nothing even compares in that regard. I know the game is a shameless skin store, but I do appreciate the former, although I also hate how small a lot of maps are. Glances at Nuketown.

They stole all the parkour stuff from Titanfall, which was made by the original IW founders when they left and founded Respawn ;)

(I use "stole" in a non-derogatory way here - 90% of good game design is cribbing together stuff that worked elsewhere in a slightly new form)


> They stole all the parkour stuff from Titanfall

Which in turn was likely quite inspired by Starsiege: Tribes


Totally. Titanfall 2 is one of my favorite games ever, but by the time I discovered it, the multiplayer was pretty much dead - no players and no recent updates.

Good single player campaign too, if anyone is interested

I hate how parkour has infested the FPS genre. There's this whole meta now that I don't care about at all, yet one has to learn it if you don't want to go 3 and 12, and it's in most games now.

It has been that way for decades, but previously the parkour stuff meant exploiting bugs in game engines, and only the top 1% or less of players could even pull off the complex inputs needed.

Personally, I was in the top 10% of HL2DM players but because I couldn't master the inputs for skating I wasn't able to compete with the truly elite tier players who would zip around the map at breakneck speeds.


Parkour has been in FPS since Quake 2, with CTF and the grappling hook.

> Generative AI in final outputs or productive work undermines the foundation of their future success vis a vis discounting or dismissing IP Law and Rights

It goes beyond just IP law compliance. Creativity is their core competency and competitive differentiator. If you replace that with AI slop, then your product becomes almost indistinguishable from that of everyone else producing AI slop.

IMO, they're striking exactly the right balance - use AI as a creative aid and productivity booster, not as something that makes the critical aspects of the final product.


> get that shit outta there because we can’t own something we stole from someone else

How does anyone prove it though? You can say "does that matter?" but once everybody starts doing it, it becomes a different story.


It's partly about Netflix getting sued by someone claiming infringement, but also partly (maybe mostly) about Netflix maintaining their right to sue others for infringement.

The scenario looks like this:

* Be Netflix. Own some movie or series where the main elements (plot, characters, setting) were GenAI-created.

* See someone else using your plot/characters/setting in their own for-profit works.

* Try suing that someone else for copyright infringement.

* Get laughed out of court because the US Copyright Office has already said that GenAI output is not copyrightable. [1]

[1] https://www.copyright.gov/ai/Copyright-and-Artificial-Intell...


You need to say you improved on the work of AI and it's yours.

Now you can sue


This scenario only plays out if it is known what was or wasn't made with GenAI.

It would become known during discovery.

How can you find out if an AI created something versus a human with a pixel editor?

In a legal case? You question the authors under oath, subpoena communications records, billing records, etc.

If there's even a hint that you used AI output in the work and you failed to disclose it to the US Copyright Office, they can cancel your registration.


Are you kidding me? Everyone knows it's pirated content (aka stealing); there's a ton of evidence here and there:

- https://arstechnica.com/tech-policy/2025/02/meta-torrented-o...

- https://news.bloomberglaw.com/ip-law/openai-risks-billions-a...

Other than that, just a bit of common sense tells you all you need to know about where the data comes from (datasets never released, LLM outputs suspiciously close to original copyrighted content, AI founders openly saying that paying for copyrighted content is too costly, etc.).


Anyone with a brain knows it is not stolen, but the fact that people will claim so is nevertheless a risk.

It is stolen on a cultural level at least.

But since many of these models will blurt out very obviously infringing material without targeted prompting, it’s also an active, continuous thief.


Yeah. No. This document says, “our strategy is wait and see.” It’s the most disruptive media technology since the TV. And they’re like, “whatever.” That is not the move of a “smarter” creative company. Lawyers are really, really bad at running companies, even if you have strong opinions about the law.

Disruptive does not mean good, or useful, or important, or valuable. There is no reason to jump onto a thing early just because it is disruptive: Netflix exists in a different creative world than the tech industry, and its audiences are even more hostile to the idea that AI is being used to steal from the things and people they admire than the audiences of typical tech industry disruptions. People who care about art and artists and films and actors tend not to value slop.

Nobody values slop, and not everything is slop, AI or otherwise. Also, stealing is not the same as copyright infringement, unless you subscribe to the RIAA definition of the word.

AI has no intent or creativity, so it can be neither right nor wrong, neither good nor bad.

So just as there's no procedural difference between an AI getting something right and an AI "hallucinating", if the word "slop" describes anything AI generates, it describes all of it.

Either everything generative AI creates is slop or nothing is. So everything is.

Also I know stealing is not the same thing as copyright infringement. I'm talking about stealing livelihoods as much as stealing art.


> AI has no intent or creativity, so it can be neither right nor wrong, neither good nor bad.

Values aren't required for something to be good or bad; outcomes are. A giant meteor strike causing a global firestorm and a brief ice age, with mass death, is bad, but giant meteors have no values.


I am talking about intent and creativity, where values quite obviously are required, because both are meaningless without them. Which is why slop is slop.

But then I vote Giant Meteor/Pestilence 2028. They will deliver what they promise.


>AI has no intent or creativity, so it can be neither right nor wrong, neither good nor bad.

AI is just a wrapper around a tool - it doesn't need intention or creativity because those come from the user in the form of prompts (which are by definition intentional)

It's just a Natural Language Interface for calling CLI tools, mostly - just like GUIs are graphical interfaces for calling CLI tools - but no one claims work done through a GUI lacks intentionality or creativity, even when it uses stochastic/probabilistic tools.

Anything a user can do with an AI they could also do with a GUI; it would just take longer and require more practice.

>Either everything generative AI creates is slop or nothing is. So everything is.

But then how do you know something is slop before you know if it's made with GenAI? Does all art exist as Schrodinger's Slop until you can prove GenAI was used? (if that's even possible)


Shouldn’t be particularly surprising Netflix is leaning in here - they’ve been pretty open about viewing themselves as “second screen”/background content for people doing other things. Their primary need these days is for a large volume of somewhat passable content, especially content they can get for cheap. Spotify’s in a similar boat and has been filling the recommended playlists up with low-royalty elevator music.

"Generated material is temporary and not part of the final deliverables" sounds like they are not looking to generative AI for content that they will air to the public.

Later on they do have a note suggesting that the following might be OK if you use judgement and get their approval: "Using GenAI to generate background elements (e.g., signage, posters) that appear on camera"


"If you can confidently say "yes" to all the above, socializing the intended use with your Netflix contact may be sufficient. If you answer “no” or “unsure” to any of these principles, escalate to your Netflix contact for more guidance before proceeding, as written approval may be required."

They do want to save money by cheaply generating content, but it's only cheap if no expensive lawsuits result. Hence the need for clear boundaries and legal review of uses that may be risky from a copyright perspective.


Yeah, that's a fair assessment. The specific mention of "union-covered work" plays to that interpretation as well:

> GenAI is not used to replace or generate new talent performances or union-covered work without consent.


Yeah, I read the "Talent" section and it's very balanced. I can't see much, if anything, to complain about, so thank goodness for SAG-AFTRA. The strike a couple of years ago was well judged.

They also mention reputation/image in there. If I can't tell something is generated by AI (some background image in a small part of a scene), it's just CGI. But if it's the uncanny-valley view of a person/animal/thing that is clearly AI generated, that shows laziness.

Yup. Everything will be muzak in the end.

But what word should we coin as buzzword for “Netflix-Muzak”?

And when we're saturated with it all, we'll start buying DVDs (or other future media) again.


Tbh I think these guidelines are just anticipating future trends.

I see a big one missing:

* fully-generated content is public domain and copyright can not be applied to it.

Make sure any AI content gets substantially changed by humans, so that the result can be copyrighted.

More importantly: don't brag; shut up about which parts are fully AI generated.

Otherwise: public domain.


> fully-generated content is public domain and copyright can not be applied to it.

Some people keep saying this but it seems obviously wrong to me.

At least in the United States, “sweat of the brow” has zero bearing on whether a work is subject to copyright[1]. You can spend years carefully compiling an accurate collection of addresses and phone numbers, but anyone else can republish that information, because facts are not a creative work.

But the output of an AI system is clearly not factual! By extension, it doesn’t matter how little work you put in—if the work is creative in nature, it is still subject to copyright.

1: https://en.wikipedia.org/wiki/Sweat_of_the_brow#United_State...

(IANAL, yadda yadda.)


Sweat of the Brow is irrelevant. Only humans (or collections of humans) can create a work that gets covered by copyright. Non-human animals cannot create copyrighted works, even intelligent ones. Humans can apply sufficient creative transformations to non-copyrighted works to create copyrighted works.

A human did create the work. A human turned on the computer and pressed the button.

> * fully-generated content is public domain and copyright can not be applied to it.

Simpler yet - and inevitable, on sufficiently long time scales - is to dispense entirely with the notion of intellectual property and treat _all_ content this way.


This would remove the incentive to generate content, no? Copyright duration could be much shorter, but I think artists, writers, etc. would prefer the continuing protection of their work. (And I'm pro-copyright reform.)

It's true, before copyright existed, no one made any art at all, and they certainly weren't paid for it. Thanks to copyright, the large majority of artists have been well and fairly compensated for their work.

Okay, you're right. I mean, there was patronage for a long time, and then a good era of proper copyright protections. The modern system really does need a reform, I agree. But I don't think we should wholesale put everything in the public domain. I mean, AI scrapers already think that's the case, but…

I'm a full-time professional musician, and I don't know anybody (at least in bluegrass) who thinks that the system of IP is designed to protect us, or is in fact serving us in economic terms. It seems much more geared to protect spotify and apple than it does the musicians.

Last year, I cut Drowsy Maggie with David Grier (something about which I boast every chance I get :-) ), and part of our journey was listening to aging, nearly-forgotten versions to find melodic and harmonic ideas to harvest and revive. For this, we of course made heavy use of archive.org's Great 78 project - and at the very same time, the RIAA (who is supposed to represent us?!) was waging aggressive lawfare against the Great 78 project, to try to take it down.

It was just the height of absurdity.

Consider that since at least 2020, every Grammy winner in both the bluegrass and Americana categories (and almost every nominee) has been released DRM-free. And that many of the up-and-coming bluegrass and jam bands are now releasing all of their shows, directly off the board, licensed with Creative Commons-compatible licenses.

https://pickipedia.xyz/wiki/DRM-free


I don't understand this opinion.

The only leverage you have to stop Spotify from taking your music and publishing it without your permission is your copyright of the music.

In fact, every time I see a complaint about copyright it's always "we tried to do something at small scale for some noble purpose and couldn't because of pesky copyright laws," and it completely ignores the massive scale of abuse for profit purpose that would occur if copyright didn't exist.

Think of how AI scraped everyone's books without permission using the flimsy excuse that it's transformative work - except they wouldn't even need that excuse or the transformation. Amazon could just take everyone's books and sell them on Kindle, then kick out all the authors, because they'd only need to buy one copy to resell it as if they were the owner of the book.


> The only leverage you have to stop Spotify from taking your music and publishing it without your permission is your copyright of the music.

There are a lot of challenges facing a band, including the frustrations of CDBaby and Distrokid. If you told me my music would just magically appear on Spotify without my having to lift a finger (and without having to implicitly endorse them by putting it there), that'd be a huge relief.

> it completely ignores the massive scale of abuse for profit purpose that would occur if copyright didn't exist.

"abuse"? If you can somehow make money by playing music that I've made, nothing will make me prouder. And whatever you're doing that's generating that profit, it will almost certainly increase the likelihood that I can plan a series of shows around it, which will in turn generate income for me. Who exactly is losing here? Where is the "abuse"?

> Think of how AI scraped everyone's books without permission using the flimsy excuse that it's transformative work, except they wouldn't even need that excuse or the transformation.

I'm already sold, you don't have to keep making it sound sweeter and sweeter.


What prevents me from stealing your work and selling it? Including the source code you wrote?

Well, what prevents you from doing that right now? The threat that I'll call the cops on you? Is that really how we want the internet to work? It's sure as hell not how I want my music to be perceived - I can't fathom wanting the state to intervene because some kid listened to something I made without permission.

You are welcome to "steal" anything I've ever made if it pleases you. And encourage your friends to steal it from you. If this process keeps repeating, look me up and let's book a show in your area, and we'll play our music _and_ demo our source code _and_ get you all dancin' and trippin' and having a merry old time.


Having spent some time in post-production, this reads more like a “please don’t get us sued” document.

This reads like a reasonable policy. More broadly speaking re: AI content: Sure, boomers scrolling facebook will continue to enjoy their AI slop baby and animal videos, but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.

Each time I scroll LinkedIn and I see some obviously AI produced images, with garbled text, etc. it immediately turns me off to whatever the content was associated with the image.

I'd be very disappointed to see the arts, including film making, shift away from the core of human expression.

“You know what the biggest problem with pushing all-things-AI is? Wrong direction. I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.” - Joanna Maciejewska


> but I think the fact that the term "AI slop" has become so commonplace reflects a bias (generally) against AI-generated content.

Is that just because we are at the very beginning stages of the technology, though? It is just going to keep getting better; will the bias against AI-generated content remain? I know people like to talk as if AI will always have the quality issues it has now, but I wouldn't count on that.


I'm not convinced that AI image generation _is_ getting better at this point. If anything, it seems to be getting somewhat weirder-looking.

Like, I gather that prompt adherence has improved somewhat, but the actual output still looks _very_ off.


I think still images have improved remarkably, though some things still look off for people (they look too flawless). Whereas video suffers from the strange fluidity and unnatural motion of things.

> though some things still look off for people (they look too flawless)

I think _maybe_ there's an uncanny valley problem (and it may vary person to person). I found Stable Diffusion 1.5's output quite _bad_, say, but not as, I dunno, objectionable and wrong-looking as current models.

Video has always been a complete mess, remains a complete mess, I don't see any real path towards it not being a complete mess. It is, in fairness, a _much_ harder problem.


Is it going to get better? Because people have been saying that for years now, and while AI output is somewhat improved, many of the issues with it have not changed.

The problem with AI slop isn't the AI part.

It's that not everyone has the talent to produce something of quality.

If you give a passionate professional chef the same ingredients for a full meal as your average home cook, the results will NOT be the same, by a long stretch.

Much of "AI slop" is to content what McDonald's is to food. It's technically edible but not high quality.


That’s an interesting way to put it, which asks the bigger question of (perhaps?):

Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?

The people doing so do not have the talent they desire, nor did they do anything to upskill themselves. It's a shortcut to an illusion of competency.


>Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?

The internet was sold to us with the promise that everyone could publish, and wasn't that great? So many voices, we will hear wonderful new things!

What happened? Enshittification. The rise of the antivax community. Empowerment of far-right white nationalists across what had been the most (lower-case "l") liberal governments in the world. Signal drowned amidst the noise of a bot-driven ad-hellscape internet.

No. We do not want a society where everyone can masquerade as an artist.

Unless, that is, we hate art.


Agreed. The double-edged sword of "giving everyone a voice" online means we also gave platforms to hate speech, low quality content, dangerous misinformation, the extreme optimization of extracting attention/money from people (ads, algos), etc.

I'm not for censorship, it's more just a reflection on human nature. I'm fairly pessimistic on AI "hopes" given what we've turned the internet into.


> Do we want a society where everyone can masquerade as an “artist”, flooding society with low-quality content using AI trained on the work product of actual artists?

Change the statement to: Do we want a society where everyone can masquerade as a “photographer”, flooding society with low-quality photos using cell phones, never having to learn to develop film, or use focus, or understand lenses...

Do we want a society where everyone can masquerade as a “painter”, flooding society with low-quality paintings because acrylics are cheap? The old masters made their own paint, after all...

Why does it matter how it was created? It wasn't Bob Ross's "Joy of Making Incredible Art", it was simply the "Joy of Painting".

And people do enjoy content that, for lack of a better word, is disposable. Look at the "short dramas" or "vertical dramas" industry that is making money hand over fist. The content isn't highbrow, but people enjoy it all the same.

> AI trained on the work product of actual artists?

Should we teach people how to play guitar without using the songs of other artists? Should those artists be compensated for inspiring others?

Some of this is an artifact of our ability to sell reproductions (and I would argue that the economics were all around distribution).

There is a long (possibly decades-long) conversation that we're going to have on this topic.


I think in all of those other examples you provide, a person still had to do something.

- take a photo of a subject

- paint something

- pick up a guitar

Whereas asking the computer, with the lowest effort possible, to do said thing for you - "draw me this", "make a song that sounds like this" - requires zero effort/skill and results in no improvement of your own ability.


> the lowest effort possible

By that rationale, AutoCAD is bad because it doesn't require the same skill as draftsmen.

Effort, labor, is not a reflection of creativity or skill.

I have a decent sense of design - I can tell you if something looks off - but I say to designers all the time, "I can't do what you do, take what I say with a grain of salt". I have rubber-ducked non-technical people and gotten questions that steered me in better directions...

AI Doesn't make me a good designer, AI doesn't make me a better coder.

Who is going to do a better job framing a house: someone with 20 years of experience with a hammer and a hand saw, or someone who has never built a house, armed with a nail gun and a circular saw?

The AI has no taste, no talent, it simply does what it's told. The crappy content is a result of it producing what it has been asked to produce.


I think that's unfair to McDonald's.

Wow, this is actually really solid from Netflix. It doesn't just hype up AI but sets real boundaries too. I like how they focus on consent and data safety instead of just "use AI for everything". Feels like they actually understand the risks around creative work and performers. Kinda refreshing to see a big studio taking the responsible route.

One of the issues with using LLMs in content generation is that instruction tuning causes mode collapse. For example, if you ask an LLM to generate a random number between 1 and 10, it might pick 7 something like 80% of the time. Base models do not exhibit the same behavior.
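A toy simulation makes the difference visible. This is not a real model query - the 80/20 split below is just an assumed stand-in for the skewed behavior described above:

```python
import random
from collections import Counter

random.seed(0)  # deterministic for the example

def uniform_pick() -> int:
    # What you'd hope "pick a random number between 1 and 10" gives you
    return random.randint(1, 10)

def collapsed_pick() -> int:
    # Toy stand-in for a mode-collapsed model: heavily favors 7
    return 7 if random.random() < 0.8 else random.randint(1, 10)

uniform_counts = Counter(uniform_pick() for _ in range(10_000))
collapsed_counts = Counter(collapsed_pick() for _ in range(10_000))

# The uniform sampler spreads mass roughly evenly (~10% per number);
# the collapsed sampler concentrates ~82% of its mass on 7.
print(max(uniform_counts.values()) / 10_000)
print(collapsed_counts[7] / 10_000)
```

The collapsed sampler still *can* produce any number - it just almost never does, which is the sense in which variation (rather than capability) is lost.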

“Creative Output” has an entirely different meaning when you start to think about them in the way they actually work.


Creativity is a really ill-defined term, but generally it has a lot more to do with abstract thinking and understanding subtlety and nuance than with mode collapse. Mode collapse affects variation, which is probably a part of creativity for some definitions of it, but they aren't the same at all.

> Using unowned training data (e.g., celebrity faces, copyrighted art)

How would one ever know that the GenAI output is not influenced by or based on copyrighted content?


I think it would be very, very difficult - almost impossible - to create a dataset to train an image generator that doesn't contain any copyrighted material that you don't have the rights to. For the obvious stuff like Mickey Mouse or Superman, you just run some other tool over it to filter them out, but there are so many ridiculous things that can be copyrighted (depictions of buildings, tattoos), things like crowd shots or pictures of cities with ads in the background, that I don't know how you could do it. I'm sure even Adobe's stock library would have a lot of violations like that.

Getty and Adobe offer models that were trained only on images that they have the rights to. Those models might meet Netflix’s standards?

Doesn’t seem likely that Adobe has an owned collection of content big enough. Seems very likely that they just deemed the legal risk to be outweighed by the commercial opportunity. They kinda had to - a product that generates stuff that gets you sued is not worth paying whatever they charge for their subscription.

I kind of wonder if that even works.

If you take a model trained on Getty and ask it for Indiana Jones or Harry Potter, what does it give you? These things are popular enough that it's likely to be present in any large set of training data, either erroneously or because some specific works incorporated them in a way that was licensed or fair use for those particular works even if it isn't in general.

And then when it conjures something like that by description rather than by name, how are you any better off than something trained on random social media? It's not like you get to make unlicensed AI Indiana Jones derivatives just because Getty has a photo of Harrison Ford.


I work in this space. In traditional diffusion-based regimes (paired image and text), one can absolutely check the text to remove all occurrences of Indiana Jones. Likewise, Adobe Stock has content moderation that ensures (up to the limits of human moderation) no dirty content. To the model, it is a world without Indiana Jones.
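That caption-side scrub can be sketched roughly like this. A simplified illustration only: the blocklist, normalization, and dataset shape are all hypothetical, and real pipelines add trained classifiers and human moderation on top:

```python
import re

# Hypothetical blocklist of franchise terms to scrub from training captions
BLOCKED_TERMS = ["indiana jones", "harry potter", "mickey mouse"]

def is_clean(caption: str) -> bool:
    """True if the caption mentions none of the blocked franchise terms."""
    normalized = re.sub(r"\s+", " ", caption.lower())
    return not any(term in normalized for term in BLOCKED_TERMS)

# Toy paired dataset: (image_path, caption)
dataset = [
    ("img1.jpg", "Indiana  Jones with a whip"),
    ("img2.jpg", "An adventurer in a fedora and leather jacket"),
]

# Keep only pairs whose caption passes the blocklist check
filtered = [pair for pair in dataset if is_clean(pair[1])]
```

Of course, this only removes the *name* from the text side; an image that merely depicts the character, captioned generically, sails straight through - which is exactly the gap the replies below poke at.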

If you ask the Adobe stock image generation for "Adventurer with a whip and hat portrait view , Brown leather hat, jacket, close-up"

It gives you an image of Harrison Ford dressed like Indiana Jones.

https://stock.adobe.com/ca/images/adventurer-with-a-whip-and...


I don't know the data distribution, but are you sure that's generated by an Adobe model? I can only see that it is in Stock + it is tagged as AI generated (that is, was that image generated by some other model?)

Disclaimer: I used to work at Adobe GenAI. Opinions are of my own ofc.


Yeah, there's no way Indiana Jones was not in the training data that created that image. To even say it's not in there is James Clapper in front of Congress level lying.

> one can absolutely check the text to remove all occurrences of Indiana Jones

How do you handle this kind of prompt:

“Generate an image of a daring, whip-wielding archaeologist and adventurer, wearing a fedora hat and leather jacket. Here's some back-story about him: With a sharp wit and a knack for languages, he travels the globe in search of ancient artifacts, often racing against rival treasure hunters and battling supernatural forces. His adventures are filled with narrow escapes, booby traps, and encounters with historical and mythical relics. He’s equally at home in a university lecture hall as he is in a jungle temple or a desert ruin, blending academic expertise with fearless action. His journey is as much about uncovering history’s secrets as it is about confronting his own fears and personal demons.”

Try copy-pasting it into any image generation model. The result looks an awful lot like Indiana Jones in all my attempts, yet I've not referenced Indiana Jones even once!


Emmmm, sure, but throw this at a human artist who has never heard of Indiana Jones and see if they draw something similar.

It comes down to who is liable for the edge cases, I suspect. Adobe will compensate the end user if they get sued for using a Firefly-generated image (probably up to some limit).

Getting sued occasionally is a cost of doing business in some industries. It’s about risk mitigation rather than risk elimination.


Feels like "paying extra for the extended warranty" vibes. What it covers isn't much (do you expect someone to come after you in small claims court and if they do, was that your main concern?) meanwhile the big claim you're actually worried about is what it doesn't cover.

And if you really wanted insurance then why not get it from an actual insurance company?


Because almost everything is risk mitigation or reduction, not elimination.

In particular, in the US, the legal apparatus has been gamified to the point that the expectation becomes people will sue if their expected value out of it is positive even if the case is insane on its merits, because it's much more likely someone with enough risk and cost will settle as the cheaper option.

And in that world, there is nothing that completely eliminates the risk of being sued in bad faith - but the more things you put in your mitigation basket, the narrower the error bars are on the risk even if the 99.999th percentile is still the same.


All the indemnities I’ve read have clauses, though, that say if you intentionally use it to make something copyrighted they won’t protect you.

So if you put obviously copyrighted things in the prompt you’ll still be on your own.


Adobe Firefly absolutely has a Spider-Man problem.

Whistleblowers, corporate leaks, output resembling copyrighted content etc. Basically it feels it's the same as the companies who unlawfully use licensed code as their own (e.g. without respecting GPL license)

Netflix could also use or provide their own TV/movie productions as training data.

Lionsgate tried that and found that even their entire archive wasn't nearly enough to produce a useful model: https://www.thewrap.com/lionsgate-runway-ai-deal-ip-model-co... and https://futurism.com/artificial-intelligence/lionsgate-movie...

This amuses me.

Consumers have long wanted a single place to access all content. Netflix was probably the closest that ever got, and even then it had regional difficulties. As competitors rose, they stopped licensing their content to netflix, and netflix is now arguably just another face in the crowd.

Now they want to go and leverage AI to produce more content and bam, stung by the same bee. No one is going to license their content for training, if the results of that training will be used in perpetuity. They will want a permanent cut. Which means they either need to support fair use, or more likely, they will all put up a big wall and suck eggs.


Maybe now all that product placement is finally coming back to haunt them.

I suspect that if GenAI starts to make content which can grab people's attention, and do it cheaply, then Netflix will become far more accommodating very quickly.

They do not want to be disrupted.


They're already disrupted.

Just look at early 20s people. They don't watch shows/movies. They only watch short form videos. Short form videos will mostly be created using GenAI tools as early as 2026.


I am thinking of building an association of AI consumers so we can organize to praise or boycott whatever we collectively find acceptable or not. I'll spend some time reading this in detail later on, but whatever it states or implies, positive or negative, it's not for businesses to set the rules as if they owned the place. Consumer associations are powerful and can't be fired when striking, since the customer is always right.

>I am thinking of building an association of AI consumers

The Gooner Association?


> it's not for businesses to set the rules as if they owned the place.

... Of course it is. As the distributor, Netflix obviously has a fairly broad ability to control what it distributes.


When you take the highway you are expected to respect a few rules, but the highway builder is not supposed to tell you how to dress in your car. These platforms are the closest thing there is to public spaces, yet they are managed as totally autonomous black boxes, and I don't like that.

> it's not for businesses to set the rules as if they owned the place.

This is for studios and companies that are producing content for Netflix.

If you want to sell to Netflix, you have to play by Netflix's rules.

Netflix has all kinds of rules and guidelines, including which camera bodies and lenses are allowed [1].

[1] https://partnerhelp.netflixstudios.com/hc/en-us/articles/360...


If you want to sell anything as an SME almost anywhere in the world, you have to play by Google's obscure SEO rules. That doesn't make it right. I get the same vibe from this kind of rules. Are those guidelines negotiated with unions or federations that include the content producers, actors, and authors? If so, is decision power equally balanced? Who writes the rules?

I know that people get very up in arms about AI in creative industries - but I feel like people don't necessarily understand that even in creative industries there is a LOT of monotonous, exploitative grunt work.

For every person who gets to make creative decisions, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by a massive team of illustrators for whom "creative liberty" is a liability to their career.

(And this example is only for the creative aspects of film-making. There is a lot of normal corporate and logistical stuff that never even affects what you see)

That's not to say I'm looking forward to the wave of lazy AI-infused slop that is heading our way. But I also don't necessarily agree with the grandstanding that AI is inherently anti-creative or only destructive. I reserve the right to be open-minded.

The irony is that movies and TV themselves represented a cheaper, industrialized and commoditized alternative to theater. And theater is still around and just as good as it ever was.


>For every person who gets to make creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions. Miyazaki gets to design his beautiful characters - but the task of getting those characters to screen must be carried out by massive team of illustrators for whom "creative liberty" is a liability to their career.

This is vastly oversimplifying and is misleading. Key animators have a highly creative role. The small decisions in the movements, the timings, the shapes, even scene layouts (Miyazaki didn't draw every layout in The Boy and the Heron), are creative decisions that Miyazaki handpicked his staff on the basis of. Miyazaki conceived of the opening scene [0] in that film with Shinya Ohira as the animator in mind [1]. Even in his early films, when he was known to exert more control, animator Yoshinori Kanada's signature style is evident in the movements and effects [2].

[0]: https://www.sakugabooru.com/post/show/260429

[1]: https://fullfrontal.moe/takeshi-honda-the-boy-and-the-heron-...

[2]: Search for "Kanada animated many sequences of the movie, but let’s just focus on the most famous one, the air battle scene." in https://animetudes.com/2021/05/15/directing-kanada/


> For every person who gets to make creative decision, there are hundreds upon hundreds of people whose sole purpose is slavish adherence to those decisions.

Yes, but at least those decisions come from one or more people, not just an algorithm


As an engineer and artist, I think a better comparison is painting -> photography. It took quite a while for photography to be considered an art, since it removed so much of the creative control from the artist. But it replaced them with new and different skills, particularly the value of curation.

Some skills, like framing, values, balance, etc. become even more important differentiators. Yes, it is much different. But as long as humans are in the loop, there is an opportunity for human communication.


>Some skills, like framing, values, balance, etc. become even more important differentiators.

I agree. I think many artists in the future will be closer to directors/cinematographers/editors than performers

Many of the skills artists have today will still be necessary and transferable, but what will separate the good artists from the bad artists will be their ability to communicate their ideas to agents / other humans

Same with software developers I suspect - communication will be the most important skill of all, and the rockstar loner devs who don't work well in teams will slowly phase out


As a software engineer you still make the hard decisions and let claude type them out for you. Isn't it similar?

I mean, yeah. No matter how you feel about AI and creativity, having AI make the creative choices is dumb and backwards.

What happens to the illustrators now?

They get to frolic on a farm upstate.

I'm curious if the parent poster thinks this is unique to film production, because I think you can make the same argument for pretty much any trade. Software engineering is 1% brilliance and 99% grunt work. This doesn't mean that software engineers are going to enjoy a world where 99% of their job goes away.

Further, I'm not sure the customers will, because the fact that human labor is comparatively expensive puts some checks and balances in place. If content generation is free, the incentive is to produce higher-volume but lower-quality output, and it's a race to the bottom. In the same way, when content-farming and rage-baiting became a way to make money, all the mainstream "news" publishers converged on that.


Should we be optimising for a world that makes software engineers (or animators) in particular happy? The seen is the lost jobs but the unseen is that everyone else gets software (and animated entertainment) cheaper.

As it happens, I don't think "AI" is close to replacing many SEs or animators but in a world where it could, we should celebrate this huge boon to society.


>GenAI is not used to replace or generate new talent performances

This is 100% a lie.

Studios will use this to replace humans. In fact, the idea is for the technology – AI in general – to be so good you don't need humans anywhere in the pipeline. Like, the best thing a human could produce would only be as good as the average output of their model, except the model would be far cheaper and faster.

And... that's okay, honestly. I mean, it's a capitalism problem. I believe with all my strength that this automation is fundamentally different from the ones from back in the day. There won't be new jobs.

But the solution was never to ban technology


Eventually consumers will use the technology to replace studios.

Any studio that isn't playing ostrich has realized this (so possibly none of them) and should just be trying to extract as much value as possible as quickly as possible before everything goes belly up.

Of course timelines are still unclear. It could be 5 years or 20, but it is coming.


The part you quote is part of the list of conditions for an if-statement, so how could it be a lie?

The issue isn't whether they said that thing or not; companies say a lot of things which are fundamentally a lie, things to keep up appearances – which are oftentimes not enforced. It's like companies arguing they believe in fair pay while using Chinese sweatshops or whatever.

In this case, for instance, Netflix still has a relation with their partners that they don't want to damage at this moment, and we are not at the point of AI being able to generate a whole feature length film indistinguishable from a traditional one. Also, they might be apprehensive regarding legal risks and the copyrightability at this exact moment; big companies' lawyers are usually pretty conservative regarding taking any "risks," so they probably want to wait for the dust to settle down as far as legal precedents and the like.

Anyway, the issue here is:

"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"

Because they believe in the sanctity of human authorship or whatever? And the answer is: no, no, hell no, absolutely no. That is a lie.


"Does that statement actually reflect what Netflix truly think and that they actually believe GenAI shouldn't be used to replace or generate new talent performances?"

The if-statement "If you want to do X, you need to get approval." probably does actually reflect what Netflix truly think, but it doesn't mean they believe X shouldn't be done. It means they believe X is risky and they want to be in control of whether X is done or not.

I don't see how you could read the article and come away with the impression that Netflix believe GenAI shouldn't be used to replace or generate new talent performances.


I’m inclined to agree. The goalposts will move once the time is right. I’ve already personally witnessed it happening; a company sells their AI-whatever strictly along the lines of staff augmentation and a force multiplier for employees. Not a year later and the marketing has shifted to cost optimization, efficiency, and better “uptime” over real employees.

The truth is that Netflix, Amazon, or any other company, honestly, would fire 99% of their workforce if it were possible, because they only care about profit – hell, they are companies, that's why they exist. At the same time, brands have to pretend they care about society, people having jobs, the climate, whatever, so they can't simply say: "Yeah, we exist to make money and we totally want to fire you guys as soon as possible." As you said, it's all masked as staff augmentation and other technical mumbo jumbo.

>GenAI is not used to replace or generate new talent performances

>> This is 100% a lie.

We’ve had CGI for decades and generally don’t mind. However, at the point where AI usage becomes a negative (e.g., the content appears low quality), I’d expect some backlash and pulling back in the industry.

In film and TV, customers have so much choice. If a film or TV show is low effort, it’s likely going to get low ratings.

Every business and industry is obviously incentivized to cut costs, but, if those cost cuts directly affect the reputation and imagery of your final product, you probably want to choose wisely which things you cut..


I think you're right, in general - certainly AI will replace background actors, though that's already been happening for years without AI generation. I'm also pretty sure that if/when AI can generate whole films, then that'll happen, too.

However, this statement is a hell of a lot better than I expected to see, and suggests to me that the actors' strike a few years ago was necessary and successful. It may, as you say, only be holding back the "capitalism problem" dike, but... At least it's doing that?


I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."

When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.

> I would somewhat disagree with this statement being a sign the strike was a success because, like, AI is not at the point of generating a whole movie in human quality today, so Netflix issuing this statement like this now, in November 2025, costs them literally nothing, and feels more like a consolation prize: "Here, take this statement, so you guys can pretend the strike achieved anything."

>

> When AI gets good enough, 2, 3, 5, 10 years from now, they simply reverse path, and this statement wouldn't delay Netflix embracing AI films that much, if anything.


There’s no guarantee AI will get good enough to replace anyone. We’ve pretty much run out of training data at this point. I’m a little annoyed that people speak about future progress like it’s an inevitability.

You’re saying their statement about what is happening is a lie because of what you predict will happen…

So they admit it. They don’t make movies, they produce content.

I’d be very surprised if Netflix doesn’t go all in on slop given their recent catalogue

Netflix joins everyone else jumping on the "rules for thee, but not for me" train.

Driving up energy costs for all, to generate AI slop and put workers in their place!

Worth reading alongside: Equity’s GDPR FAQ.

https://www.equity.org.uk/advice-and-support/know-your-right...

Common-sense, practical, and covers a lot of the shifting ground around an artist’s ability to withdraw consent under GDPR and the ways they can properly use this to prevent their likenesses being used to train their digital replacements.

(Equity is the UK equivalent of the AEA and SAG-AFTRA combined)


It's interesting that they don't explicitly state the fact that AI-generated content cannot be copyrighted. They seem to dance around that. The provision against generating a major character is about respect for talent and so on, rather than the fact that that would make the major character public domain and therefore able to be used by anyone for anything.

I wonder if we're going to see a push back by media companies around copyright over AI-generated content. Though I don't see how; copyright is explicitly an artificial legal protection of human works.


Netflix is basically strangling the creative potential of GenAI before it can even breathe. Their new “guidelines” read like a corporate legal panic document, not a policy for innovation. Every use case needs escalation, approval, or a lawyer’s blessing. That’s not how creativity works.

The irony is rich: they built their empire on disrupting old Hollywood gatekeeping, and now they’re recreating it in AI form. Instead of letting creators experiment freely with these tools, Netflix wants control over every brushstroke of AI creativity.


Thankfully GenAI has no creative potential so we aren’t losing much.

I do agree Netflix wants to crush creators.



