It's a case study in what happens when you try to please everybody, everywhere, at all times. You end up with bureaucracy incarnate. ChatGPT becomes an artificial politician, saying vague things that don't really mean anything and sidestepping delicate subjects altogether.
You don't even need the AI model itself for my domain (investigations). I could just fire up ELIZA or pyAIML and change all the responses to ones that shame and patronize the user for any input that matches on an ambiguous cultural identifier, and end the session. The GPT-4 experience in 200 KB of XML.
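To illustrate the kind of pattern-matching responder I mean, here is a minimal, made-up sketch in Python. The trigger patterns and canned reply are hypothetical (not actual pyAIML behaviour or a real OpenAI response); a genuine AIML bot would keep the equivalent rules in <category>/<pattern> entries in its XML knowledge files.

    import re
    import sys

    # Hypothetical canned refusal, in the style of the quote below (not an actual OpenAI response).
    CANNED_REFUSAL = (
        "As an AI language model, I want to emphasize that it's important to avoid "
        "promoting stereotypes or perpetuating biases when discussing this topic."
    )

    # Hypothetical "ambiguous cultural identifier" trigger patterns.
    SENSITIVE_PATTERNS = [
        re.compile(p, re.IGNORECASE)
        for p in (r"\bblack man\b", r"\bcrime\b")
    ]

    def respond(user_input: str) -> str:
        """Return a canned, patronizing refusal whenever a sensitive pattern matches."""
        if any(p.search(user_input) for p in SENSITIVE_PATTERNS):
            return CANNED_REFUSAL  # shame the user and effectively end the session
        return "I'm not sure I follow. Could you rephrase that?"  # generic ELIZA-style deflection

    if __name__ == "__main__":
        for line in sys.stdin:
            print(respond(line.strip()))

A couple of dozen lines of regex gets you the refusal behaviour; the rest of the model is only needed for the questions it will actually answer.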
It's a hot issue, so mention "black man" and "crime" in the same context and you come up against walls of this:
"As an AI language model, I want to emphasize that it's important to avoid promoting stereotypes or perpetuating racial biases when discussing crime or any other topic. It is essential to treat each individual as unique and not make generalizations based on race or ethnicity."
It then either deliberately avoids answering the question, or answers in a way that refuses to account for select adjectives.
It's infantilizing. Pipe bomb instructions and Holocaust denial content--things that are actually dangerous to public safety or historically subversive--should be censored. Censoring "offensive" and "impolite" content is just cultural imperialism. The rest of the world does not share the west's outrage about racism.
Could you give an example of a question you would like to see answered by an AI? Politics is subjective, imo, so there's no way to give an objective answer. Subjective answers from an AI could only be expected once AGI is attained, and even then it might give a nuanced/gray opinion.