> This just creates a moral hazard to not disclose the use of any AI-generated assets
The whole space is somewhat amusing to me. What is the bigger moral hazard: openly disclosing everything about your content pipeline and getting your team's efforts shitcanned, or keeping everything private unless a court order shows up?
It has been widely speculated that the primary reason OpenAI never disclosed the full training dataset for GPT-3 or GPT-4 was to avoid potential legal backlash.
I prefer to think of it as the Uber/AirBnB model: do illegal things at such scale that you clog the enforcement mechanisms. Enforcement then becomes such an unreasonable burden that they change the laws in your favor.