I think the key thing is for the AI company to actually let the user know that this is a language model, and that the information it spits out should not be trusted. Obviously, Microsoft is not going to do that, as they are trying to market the new Bing as an information search engine.
OpenAI does its best to make it clear that it is just a language model, but what can you do when you have users that just instantly click "Agree, Agree, Next, Next, Agree"?
They absolutely do add custom logic for a lot of stuff. This has the side effect of neutering the functionality in some areas just to chastise the user for any perceived improper use of services.
Well, we can argue such changes are necessary. Just like Google Search is required to remove/hide some search results (based on regional jurisdictions). Is that similar to censorship, or copyright enforcement, or preventing the spread of fake information? I do see the counter-argument, too, where AI tools should just be tools and users should learn how they work ("don't believe everything this tool outputs").
They've already added custom logic to prevent their LLM from e.g. praising Nazis or whatever restrictions people are upset about -- seems it'd be easy to configure the software to exclude references to known unavailable services.
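For illustration, a post-generation check along these lines is roughly what I mean. This is only a minimal sketch with hypothetical names (the blocklist entries and the function are made up, not anything Microsoft or OpenAI actually ships), just to show how cheap that kind of filter is to bolt on:

```python
# Hypothetical post-filter: flag mentions of services known to be unavailable.
UNAVAILABLE_SERVICES = {"SomeRetiredService", "DiscontinuedProduct"}  # placeholder entries

def flag_unavailable(reply: str) -> str:
    """Append a caveat if the model's reply mentions a blocklisted service."""
    mentioned = [s for s in UNAVAILABLE_SERVICES if s.lower() in reply.lower()]
    if mentioned:
        reply += f"\n\n[Note: {', '.join(mentioned)} may no longer be available; please verify.]"
    return reply

print(flag_unavailable("You could try DiscontinuedProduct for that."))
```

Obviously a real deployment would be fancier (and probably done in the prompt or a moderation layer rather than string matching), but the point stands: the plumbing for this kind of restriction already exists.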