> It appears they've rolled out some kind of bug fix which looks at the answers they've just printed to the screen separately, perhaps as part of a new Bing session with no memory, to decide whether they look acceptable
This has been around for at least a few days. If Sydney composes an answer that the check disagrees with, it deletes it. A similar mechanism can be seen in ChatGPT, which highlights an answer in orange if it violates OpenAI's content guidelines.
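To make the speculated design concrete, here is a minimal Python sketch of such a two-pass flow, using OpenAI's public moderation endpoint as a stand-in for the stateless checker. The model name, the helper function, and the overall flow are illustrative assumptions; nothing here reflects Bing's actual implementation, only the shape of "generate first, judge the finished answer separately."

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_with_post_hoc_check(prompt: str) -> str:
    """Illustrative two-pass flow: generate, then re-judge statelessly.

    The checker sees only the finished answer, with no conversation
    history, and can veto it after the fact (hypothetical sketch).
    """
    # First pass: produce the answer inside the normal chat session.
    answer = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, purely for illustration
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content

    # Second pass: a memoryless moderation call that sees only the
    # answer text, analogous to "a new session with no memory."
    flagged = client.moderations.create(input=answer).results[0].flagged

    # If the checker objects, withdraw the answer. Bing reportedly
    # deletes it; ChatGPT highlights it in orange instead.
    return answer if not flagged else "[answer withdrawn by moderation check]"
```

The key design point the sketch tries to capture is the separation: the second pass has no access to the prompt or the session, so it judges the output on its own merits, which matches the observed behavior of answers being printed first and retracted afterward.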