> It appears they've rolled out some kind of bug fix which looks at the answers they've just printed to the screen separately, perhaps as part of a new Bing session with no memory, to decide whether they look acceptable

This behavior has been around for at least a few days: if Sydney composes an answer that the check doesn't approve of, the answer gets deleted. A similar behavior can be seen in ChatGPT, which will highlight an answer in orange if it violates OpenAI's content guidelines.
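
What's being described amounts to a second, stateless pass over the finished answer. Purely as an illustration (nothing here reflects Bing's actual internals), a sketch of that pattern using OpenAI's public moderation endpoint as a stand-in classifier might look like the following; the model name and refusal text are placeholders:

    # Sketch only: generate an answer, then run a separate post-hoc check
    # on the finished text and suppress it if the check flags it.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def answer_with_post_hoc_check(prompt: str) -> str:
        # First pass: compose the answer as usual.
        draft = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": prompt}],
        ).choices[0].message.content

        # Second pass: a stateless check that sees only the finished text,
        # analogous to the "new session with no memory" described above.
        verdict = client.moderations.create(input=draft)
        if verdict.results[0].flagged:
            # Bing deletes the message; here we just substitute a refusal.
            return "I'm sorry, I can't show that answer."
        return draft

Because the second pass never sees the conversation, it can reject text the first model was perfectly happy to produce, which is consistent with answers appearing and then vanishing mid-stream.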



