
This would be so much better if we knew the description fed into the "AI".



My best (frankly uneducated) guess would be that they trained the model with famous movie posters as the target and still frames from the corresponding movies as the input. Then they fed frames from these movies to the AI to get a poster out.
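If that guess is right, the setup would look roughly like the sketch below (PyTorch; the dataset, toy architecture, and hyperparameters are hypothetical stand-ins, not the actual system):

  # Guessed setup: supervised image-to-image model trained on
  # (still frame, poster) pairs. Everything here is illustrative.
  import torch
  import torch.nn as nn
  from torch.utils.data import DataLoader, Dataset

  class FramePosterPairs(Dataset):
      """Hypothetical dataset of (movie still, matching poster) tensors."""
      def __init__(self, pairs):
          self.pairs = pairs  # list of (frame_tensor, poster_tensor)
      def __len__(self):
          return len(self.pairs)
      def __getitem__(self, idx):
          return self.pairs[idx]

  # Toy encoder-decoder standing in for whatever architecture was really used.
  model = nn.Sequential(
      nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
      nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
      nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
      nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
  )

  def train(pairs, epochs=1):
      loader = DataLoader(FramePosterPairs(pairs), batch_size=8, shuffle=True)
      opt = torch.optim.Adam(model.parameters(), lr=1e-4)
      loss_fn = nn.MSELoss()  # posters are the objective, frames are the input
      for _ in range(epochs):
          for frame, poster in loader:
              opt.zero_grad()
              loss = loss_fn(model(frame), poster)
              loss.backward()
              opt.step()

  # Inference: feed a still from a new movie and take the output as the "poster".
  def make_poster(frame):
      with torch.no_grad():
          return model(frame.unsqueeze(0)).squeeze(0)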


I don't think they look as if movie posters were the training objective. No text, no large faces of leading actors, wrong aspect ratio, unusual colour palette.


Oh, so image data? I got the impression they used a text-only description as input.



