Ads featuring you instead of an actor: do you want to see how well these clothes fit you? How would you like to be slim again? Here is your hair back! This supplement makes you look like the Terminator! Here is a regular day at your new property! All featuring you!
If you think this won't attract huge investments from VCs, you are likely too optimistic.
I'm thinking more "Google builds your face model in exchange for some cool messaging app" -> "Google lets advertisers give Google clothing models (as in 3D models), etc., to show on the user's actual body with their face"
* Everyone can access it. Not just sophisticated actors anymore
* It might make us rethink the entire "truth" chain, on how we source our information
Of course, the two go hand-in-hand, but the latter point is overdue: while video is perhaps more glaring, it's something that's needed in a lot of other areas as well (text -- news articles, messages, mail; sound -- phone calls, etc; image -- photoshop, though we start to get used to it).
What's good about letting every script kiddie use it? I'm thinking, the less usage the better?
I agree that provenance could become more important, but I don't see it changing how memes spread. For a lot of people it's just entertainment and they don't care whether it's true.
> What's good about letting every script kiddie use it?
I think parent was making the point that once this tech is in the hands of the common man, its value for sophisticated players might diminish strongly. Same way an undisclosed 0day in a high-value target (say iOS) is extremely valuable: once it gets disclosed and everyone knows about it, people can come up with workarounds and eventual fixes, rendering the threat basically defanged.
I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.
> I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.
This is why I am all for the current hype and the apps. Everyone who has access to electronic media should know that videos are now easier to manipulate than ever. This undermines the attack vector of supporting fake news with fake videos. (At least it should; I do not know about the psychological side of it. Maybe even knowledge of a video's falsehood does not diminish its impact that much. Still, I think it is best to make video fakeability common knowledge.)
I used to work for a company that made verifiable audio and video for law enforcement. Even a constantly running timecode could be circumvented in the early 90s. (They used a dedicated audio channel to encode a hash of the last few seconds - on analogue.) It actually takes a lot of effort to show that a video hasn't been, or could not have been, doctored.
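The hash-over-the-last-few-seconds scheme described above is essentially a hash chain: each tag commits to the current chunk and to the previous tag, so splicing or editing any earlier segment invalidates everything after it. A minimal sketch (the chunking and names are illustrative, not the actual 90s hardware scheme):

```python
import hashlib

def chain_hashes(chunks, seed=b"stream-start"):
    """Hash-chain successive media chunks: each tag covers the
    current chunk plus the previous tag, so tampering with any
    chunk changes every subsequent tag."""
    tags = []
    prev = hashlib.sha256(seed).digest()
    for chunk in chunks:
        prev = hashlib.sha256(prev + chunk).digest()
        tags.append(prev)
    return tags

def verify(chunks, tags, seed=b"stream-start"):
    """Recompute the chain over the received chunks and compare
    against the recorded tags."""
    return tags == chain_hashes(chunks, seed)

# Splicing in an edited chunk breaks the recorded chain.
original = [b"frame-block-1", b"frame-block-2", b"frame-block-3"]
tags = chain_hashes(original)
tampered = [b"frame-block-1", b"EDITED", b"frame-block-3"]
assert verify(original, tags)
assert not verify(tampered, tags)
```

Of course this only proves integrity relative to the recorded tags; on its own it says nothing about whether the original recording was authentic, which is part of why proving a video "could not have been doctored" is so hard.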
> I'm leaning towards the same school of thought. I'd rather see this tech widely accessible and hence all video/picture material henceforth considered completely untrustworthy, than living in the misguided notion that video evidence is trustworthy.
On the flip side, this enables revenge porn deepfakes and person to person harm on a much, much more common level. If you are someone who becomes briefly famous on the internet, expect that a TON of harmful or disturbing videos of "you" will show up in short order. I'm not sure we have good regulations and laws in place to protect people against the misuse of this tech yet.
I understand, but I believe it is obvious now that technology like this cannot be legislated away or controlled. The moment that people discovered that ML can be used to generate fakes was the point of no return.
There's not a single thing in the chain of tools and knowhow required to produce this that can be kept out of the hands of malicious actors.
The best course of action now is to rapidly educate people of the implications and hope that the initial wave of abuse will be without too many casualties.
Think about Photoshop - back before it existed, people assumed magazine ads showed real people (not images carefully, manually edited by artists). Photoshop and tools like it became commonly available. Over time, the tools being more widely available caused more people to scrutinize and be suspicious of what they saw. Ads featuring heavily manipulated photos had the opposite of their original intent. People picked up the term 'shopped to mean an image was fake or deceptive. Kids in middle school learn how to do it.
Now, when you see a before and after photo, or an advertisement, the default is to assume it has been manipulated. That change came about with broad access to once-rare tools.
Basically, a (good) deepfake was out of reach for all but a few until recently, so if some bad actor with money and resources wanted to fake a video they could do it and few people would even think it could be faked.
Nowadays, the bar is way higher for this, as every video can be suspect.
But how many will suspect them? If 10 million people watch a video but only (generously) 1 million people think or are at least willing to consider it may be fake, that is very effective disinformation. We have to remember that the average person (especially in older generations) is likely unaware that this sort of technology is possible and easily accessible.
More people will suspect them than before. Just like photoshopping is now common knowledge, enough to have terms associated with it added to the common lexicon.
As people with the knowledge that video is no longer trustworthy, it is incumbent on us to share that message with other people, so it does become common knowledge.
Bereavement. It won’t be for everyone, but perhaps some people who have lost loved ones might like to feel that they are still alive by bringing them into modern day videos.
Fantasies. You hear about how some people like to live out fantasy lives in video games. I heard one about a severely disabled teen or young man who gets to be a big strapping strongman in his favorite video game. Imagine an old lady who can create a video of herself at age 25 free soloing El Capitan by deepfaking herself into an Alex Honnold video.
Prior to the invention of the photograph, people were forgotten. I don't think that was a bad thing. Our brains weren't designed to retain all family members from all time. Also, why? Why would we need that? I don't care to see a family member from 4 generations ago walking to the refrigerator in some video.
No, this tech will be used to amuse and/or influence the intellectually challenged.
It’s not about why you don’t want it. It’s about why someone else does. Some people get their dead pets stuffed and keep them in their house. Maybe someone might do the same digitally with their loved ones. Maybe a couple will add in their son who died of a heroin overdose in 2016 to Mom’s 70th birthday video. Maybe you won’t. Both are fine.
I don't know that it makes sense to isolate just the narrow concept of deep fake creation. It seems like fundamentally a lot (most?) of the breakthroughs that make creation of deep fakes possible are the same ideas that make possible the current state of the art for classification, decision problems, advanced NLP, and other things we call ML or AI.
So to do this pro/con analysis you probably need to include the pros of these related technologies as well, which are certainly more than trivial amusement.
> So to do this pro/con analysis you probably need to include the pros of these related technologies as well, which are certainly more than trivial amusement.
Like the time my uncle (by marriage) sent a JibJab to the whole family featuring 4 recently-dead family members... real big pro, I'm sure... /s
I'm using this for the second half of a pipeline: letting authors generate fake faces for their fictional characters, then putting those faces on top of video snippets from actors. I'm still in the exploratory stages of what all is possible/useful, but feedback so far has been generally positive ("characters feel more real", "easier to empathize", "super interesting potential").
Could also make for some interesting low-budget book commercials/trailers where authors (or I guess anyone making a video starring people) can design the characters in a video without worrying about what actors are available and/or what CGI costs.
Fake news on high octane race gas.