Thinking aloud, but couldn't someone create a website with some malicious text that, when quoted in a prompt, convinces the LLM to expose certain private data to the web page, and couldn't the webpage send that data to a third party, without the need for the LLM to do so?
This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.
This is probably possible to mitigate, but I fear what people more creative, motivated and technically adept could come up with.