See, that's the thing about content creation: you entirely lose the "it's just a tool and tools are inherently neutral" argument. Nobody's claiming the implementor deliberately created racist content, but you can't disavow responsibility for the content you generate by pointing at the input data and algorithms, because you chose them. Like it or not, what's included in the source data for these models is an editorial choice, and which model you use and how you use it are editorial choices too.
Actually, I think “it’s a reflection of society’s bias” is a totally reasonable statement to make if your product is a reflection of the content society has generated.
Rather than impugning the model makers for not curating society’s content to erase its biases, to my mind this demonstrates what’s broken with society, and it should be read as an indictment of how we encode our society in our media. If you ask an oracle a question and it produces racist output, that is more an indictment of society itself, because the media of our society IS racist. Sweeping it under the rug serves no one, IMO.
Instead the story is “AI models are racist,” which misses the real problem. The real problem is that when a human in our society wants to portray a robber, they use a black man. That should be the story, and pitching it as a flaw with AI models is like criticizing the color of the paint when the foundation is cracked.
It's not perpetuating stereotypes; it's showing that these stereotypes exist in society. Similar to how a comedian might point out the absurdity of racism by writing and delivering a joke about race, or how a child might ask why people of certain skin tones tend to have different hairstyles. Neither the child nor the comedian is necessarily racist, but both are making observations that may not be considered politically correct.
The conversations we, and many other people, are having right now wouldn’t be happening if they had curated society’s biases out of their product.
Saying their product perpetuates stereotypes while ignoring that it reflects the bias of the entirety of media is ignoring that all media perpetuates these stereotypes; their models perpetuate them no more and no less than literally the entirety of all media. The fact that they hold it up for careful examination in an irrefutable way is, in my mind, a feature, not a flaw.
I would note that SD produces a base model, which can be, and routinely is, fine-tuned. I would rather see a fine-tuning that eliminates the bias in the media than a base model that is fundamentally divergent from the state of the media today. That’s the proper abstraction: a base model that is the raw output of mass training on available media, and specialized models that apply some curation on top.

But I also object to the base models being censored, because that cuts a base truth out of the underlying semantic model, so that its outputs are at odds with observable reality. Specialized models shouldn’t be doing things like un-censoring; they should be adjusting base truth toward curated views. An “unbiased model” should be a fine-tuning of reality. A “safe for work model” should be a fine-tuning of reality. The challenge is that the model producers don’t trust the model users to be adults making adult decisions and thinking adult thoughts, including about biases, stereotypes, etc.
But regardless, I think base models should never be thought of as final products but as a basis to produce a final product.
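To make that abstraction concrete, here is a minimal sketch of the layering being described, assuming the Hugging Face diffusers library; the base checkpoint ID is one common public Stable Diffusion release, and the LoRA path is a hypothetical placeholder for whatever curated fine-tune you choose.

```python
# Minimal sketch of "base model + curated fine-tune", assuming the Hugging Face
# `diffusers` library. The LoRA path below is a hypothetical placeholder.
import torch
from diffusers import StableDiffusionPipeline

# 1. The base model: the raw result of mass training on available media,
#    with all of that media's biases intact.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# 2. The curation layer: a specialized fine-tune (here, LoRA weights) that
#    adjusts the base toward an editorial view such as "unbiased" or
#    "safe for work".
pipe.load_lora_weights("./curated-debias-lora")  # hypothetical weights

# 3. The final product is base + curation, not the raw base checkpoint.
image = pipe("a portrait of a bank robber").images[0]
image.save("output.png")
```

The point of the split is that the editorial decision lives in step 2, where it can be swapped, audited, or omitted, rather than being baked invisibly into step 1.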
> Saying their product perpetuates stereotypes while ignoring that it reflects the bias of the entirety of media is ignoring that all media perpetuates these stereotypes
Not being further up the hierarchy of many bad actors doesn't absolve you of responsibility. What goes into these models is an editorial decision, as is which one to use in your software. This author isn't distributing models, and they aren't being taken to task over the model's content; they're generating images from those models and distributing them, and those images are what bother people. If they had used the same model but somehow hadn't gotten objectionable results, nobody would know, let alone care.
You say the media is to blame? Sure. When you start generating content with a particular perspective, you are media.
Snakes are, at times, quite dangerous. Granted, not all snakes are, but if I'm risk-averse and unaware of which snakes are quite possibly deadly, am I not, at least in my current ignorant state, best served by avoiding all snakes?
Feel free to substitute "snakes" with any existing stereotype. I personally am a fan of "people who eat pizza with pineapple".
There's a whole lot of information out there about the harm caused by stereotypes. Normally I tell people to look it up because it's not hard and I'm not your research assistant, but here's a freebie: "Are Emily and Greg More Employable Than Lakisha and Jamal?" is a 2004 study in which the authors sent out roughly 5,000 job applications using fabricated, equally weighted resumes that were randomly assigned either a stereotypically Black American or a stereotypically white American name. Resumes attached to 'white'-sounding names were fifty percent more likely to receive a callback. Would you consider that a reasonable pan-industry attempt by employers to protect themselves from harm, based entirely on someone's name?
Your mistake is that you're only thinking about it from the perspective of the stereotyper. Now think about the negative impact (career, legal system, etc.) on the person being incorrectly stereotyped.