
> Our teams continue pushing the frontiers of our latest models with safety at the core.

They're not kidding, Gemini (at least what's currently available) is so safe that it's not all that useful.

The "safety" permeates areas where you wouldn't even expect it, like refusing to answer questions about "unsafe" memory management in C. It interjects lectures about safety in answers when you didn't even ask it to do that in the question.

For example, I clicked on one of the four example questions that Gemini proposes to help you get started, and it was something like "Write an SMS calling in sick. It's a big presentation day and I'm sad to let the team down." Gemini decided to tell me that it can't impersonate positions of trust like medical professionals or employers (which is not at all what I was asking it to do).

Everything else I asked it, it got wrong, and often obviously wrong. The funniest (though I'm glad it was obviously wrong) was when I asked, "I'm flying from Karachi to Denver. Will I need to pick up my bags in Newark?" and it told me, "No, because Karachi to Newark is a domestic flight."

Unless they stop putting "safety at the core," or figure out how to do it in a way that isn't unnecessarily inhibiting, annoying, and frankly insulting (protip: humans don't like being accused of asking for unethical things, especially when they weren't asking for them; when other humans do that to us, we call it assuming the worst, and it's a negative personality trait), any announcements, releases, or breakthroughs from Google are going to be a "meh" for me.
