I for one am glad that, for once in my life, the people behind an obviously huge advancement are taking the human impact into account and releasing the technology responsibly. Maybe it’s just a “BS statement”, but given the major strides they’re making toward removing racial/gender biases[1] from similar projects, I don’t think it’s just hot air. Especially given the phenomenon of “bias amplification”.
Maybe I’m being too optimistic, but either way, given the pace of progress, we won’t have to wait very long to play with this magic.
That’s a pretty uncharitable interpretation of my post. You shared an example of a single mitigation that you personally find ludicrous (without explaining why). And then I’m supposed to throw up my hands and go “I guess it’s pointless to try and be less racist”?
Tay might still be around if Microsoft had given some thought to potential issues before release. I’d prefer not to have this awesome technology tainted out of the gate as a tool for racists and pornographers. They’ll get their hands on it eventually, but it’d be nice if they didn’t get all the up-front press.
That is the only mitigation used in DALL-E 2, which until recently was the only publicly available text-to-image model.
> I’d prefer not to have this awesome technology tainted out of the gate as a tool for racists and pornographers
Why is it your business what people do with the model? If people want to be racist they can already do so, they don't need a shitty model that doesn't work half as well as paying some guy in the third world $2/h to shitpost online. And I don't see the problem with pornography.
Everything has a “bias” including reality.
This type of talk is not about removing biases; it is about imposing biases in line with the wokeligion and attempting to steer public perception away from the facts of reality. But reality always wins in the end.
Indeed, you can't be trusted not to do anything bad with it. Who's going to vet each and every user? Who's going to check every time it's used over the coming 10, 20, 30 years?
If he's racist, he can already hire black people to take a photo for an "upcoming action movie" called Evil Baby. They will be asked to hold guns aimed at a crib. Then he can release it on the internet and say he saw three black people about to shoot a baby.
It would be called out as fake or staged, just like an Imagen or DALL-E 2 image would be called out as fake. The thing that makes stories real is real people - actual events and corroborating testimony.
If he wants a picture of a dog with sunglasses in a boat, he can hire a dog, a handler, and a boat. He doesn't need this model for anything.
> The thing that makes stories real
Since when do the hordes care about stories being real? People get harassed over photoshopped images. People get killed over false rumors. These tools make it easier to trigger such reactions.
[1] https://openai.com/blog/reducing-bias-and-improving-safety-i...