
> This just creates a moral hazard to not disclose the use of any AI-generated assets

The whole space is somewhat amusing to me. Which is the bigger moral hazard: openly disclosing everything about your content pipeline and getting your team's efforts shitcanned, or keeping everything private unless a court order shows up?




No one is being protected from the consequences of risky behavior, so moral hazard doesn't apply.


So, the OpenAI model?


It has been widely speculated that the primary reason OpenAI never disclosed the full training dataset for GPT-3 or GPT-4 was to avoid potential legal backlash.


I prefer to think of it as the Uber/Airbnb model: break the law at such scale that you clog the enforcement mechanisms. Then compliance becomes such an unreasonable burden that they change the laws in your favor.


Classic VC bullshit.


And GitHub Copilot


That's OpenAI.



