As we build and understand them now, there are pretty good structural reasons to believe that LLMs cannot be tweaked or tuned to be more than incidentally truthful.
If you use an LLM-only solution, sure, but there are many more opportunities if you expand the system beyond the model itself.
We could, for example, run existing copyright-detection tools on the output and refuse to send it to the client side if they flag it. From that perspective it's just another moderation check.
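Concretely, the shape of that check is just a filter between the model and the client. A minimal sketch, where `generate` and `copyright_scan` are hypothetical stand-ins for the actual model call and whatever detection tool you'd plug in:

```python
def generate(prompt: str) -> str:
    """Placeholder for the actual LLM call."""
    raise NotImplementedError

def copyright_scan(text: str) -> bool:
    """Placeholder for an existing copyright-detection tool; returns True on a match."""
    raise NotImplementedError

def handle_request(prompt: str) -> str:
    completion = generate(prompt)
    # Run the detector on the model output, just like any other moderation check.
    if copyright_scan(completion):
        # Refuse to send the flagged output to the client side.
        return "Sorry, I can't share that response."
    return completion
```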
Copyright-detection software is notoriously bad, because telling infringement apart from fair use requires human context and judgment. But say we ignore that: it's still the lowest-hanging fruit on the improving-generative-models tree. Deciding whether something is true, so the bot can refuse to output it when it isn't, is waaaay harder, and it's the thing actually needed to fix the problem we've been talking about.
I was more trying to add an interesting philosophical perspective than to comment on this particular instance.