Gemini. I first asked it to tell me about the Heian period (which it got correct) but then it generated images and seemed to craft the rest of the chat to fit that narrative.

I mean, just asking it for a "samurai" from the period will give you this:

https://g.co/gemini/share/ba324bd98d9b

>A non-binary Indigenous American samurai

It seems to recognize its mistakes if you confront it, though. The more I mess with it, the more I get "I'm afraid I can't do that, Dave" responses.

But yeah. Seems like if it makes an image, it goes off the rails.




It's funny how they introduced a clear US-centric bias while trying to push for more diversity.


It's ironic that even the cultural left in the US is not immune to American exceptionalism.


"diversity" is only ever code for "the sensibilities of a certain set of Californians".


Got it. I asked it a series of text questions about the period and it didn't put in anything obviously laughable (including when I drilled down into specific questions about the population, gender roles, and ethnicity). Maybe it's the image creation that throws it into la-la land.


I think so too. I could be wrong, but I believe once it generates an image, it tries to work with it. Crazy how the "text" model seems to know how wildly wrong it is while the image model just does its thing. I asked it why it generated a Native American, and it ironically said, "I can't generate an image of a Native American samurai because that would be offensive."


I suspect that in the case of the image model they directly modify your prompt, and in the case of the text model they don't.
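
If that's the case, the flow might look something like this sketch. To be clear, this is purely hypothetical: every name and the rewrite rule below are invented for illustration, not Gemini's actual code or API.

    # Hypothetical sketch of the suspected pipeline: the image path silently
    # rewrites the user's prompt before generation, while the text path
    # passes it through untouched. All names here are made up.

    DIVERSITY_SUFFIX = ", depicting people of diverse ethnicities and genders"

    def rewrite_image_prompt(prompt: str) -> str:
        # Appended unconditionally, even to historically specific requests,
        # which would explain a Heian-period "samurai" coming back as an
        # Indigenous American.
        return prompt + DIVERSITY_SUFFIX

    def generate_image(prompt: str) -> str:
        return f"[image for: {prompt}]"  # stub standing in for the image model

    def generate_text(prompt: str) -> str:
        return f"[text for: {prompt}]"   # stub standing in for the text model

    def handle(prompt: str, wants_image: bool) -> str:
        if wants_image:
            return generate_image(rewrite_image_prompt(prompt))  # modified prompt
        return generate_text(prompt)                             # original prompt

    print(handle("a samurai from the Heian period", wants_image=True))
    # -> [image for: a samurai from the Heian period, depicting people
    #     of diverse ethnicities and genders]

That would also square with the text model seeming to know the image is wrong: on this theory, it never sees the rewritten prompt.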



