
have you been able to contact OpenAI about this? It sounds like they're actively adding load to your CS ops with this



I think the key thing is for the AI company to actually let the user know that this is a language model and that the information it spits out should not be trusted. Obviously, Microsoft is not going to do that, as they are trying to market the new Bing as an information search engine.


OpenAI does its best to make it clear that it is just a language model, but what can you do when you have users who just instantly click "Agree, Agree, Next, Next, Agree"?


Clearly their best isn't good enough.


What are they going to do? Add custom logic? Where does it stop?

The underlying problem is that LLMs can't absorb ad hoc operational fixes for these kinds of errors at scale.


They absolutely do add custom logic for a lot of stuff. This has the side effect of neutering the functionality in some areas just to chastise the user for any perceived improper use of services.


Well, we can argue such changes are necessary, just as Google Search is required to remove or hide some search results depending on regional jurisdictions. Is that more like censorship, copyright law, or curbing fake information? I do see the counter-argument, too: AI tools should just be tools, and users should learn how they work ("don't believe everything this tool outputs").


They've already added custom logic to prevent their LLM from, e.g., praising Nazis, or whatever other restricted behavior people are upset about -- it seems it'd be easy to configure the software to exclude references to known unavailable services.
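
For what it's worth, here's a minimal sketch of what such custom logic could look like, assuming a hypothetical post-processing step that scans the model's answer for mentions of services the operator knows it doesn't offer. The service list, disclaimer text, and function name are invented for illustration, not anything OpenAI or Microsoft actually ships:

    # Hypothetical post-filter: flag model output that mentions services
    # the operator knows are unavailable. All names here are illustrative.

    KNOWN_UNAVAILABLE = {
        "phone support",      # e.g. a support channel the company never offered
        "in-store pickup",    # e.g. a feature that doesn't exist
    }

    DISCLAIMER = (
        "Note: this answer mentions a service that is not currently offered; "
        "please verify the details on the official site."
    )

    def filter_response(text: str) -> str:
        """Append a disclaimer if the answer references a known-unavailable service."""
        lowered = text.lower()
        if any(service in lowered for service in KNOWN_UNAVAILABLE):
            return f"{text}\n\n{DISCLAIMER}"
        return text

    if __name__ == "__main__":
        print(filter_response("You can reach us via phone support 24/7."))

A keyword filter like this is crude, of course; the point is just that a deterministic check layered on top of the model is cheap compared to trying to retrain the model itself.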



