
The long context length is of course incredible, but I'm more shocked that the Pro model is now on par with Ultra (~GPT-4, at least the original release). That implies that when they release 1.5 Ultra, we'll finally have a GPT-4 killer. And assuming 1.5 Pro is priced similarly to the current Pro, that's a 4x per-token price advantage.

Not surprising that OpenAI shipped a blog post today about their video generation — I think they're feeling considerable heat right now.



Gemini 1 Ultra was also said to be on par with GPT-4, and it's not really there, so let's see for ourselves when we can get our hands on it.


Ultra benchmarked around the original release of GPT-4, not the current model. My understanding is that was fairly accurate: it's close to current GPT-4 but not quite equal. However, close-to-GPT-4 performance at 4x lower cost and 10x the context length would be very impressive and, IMO, useful.


No, it benchmarked around the original release of GPT-4 given 32 attempts versus GPT-4's 5.
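
To make that concrete: my reading of the "32 attempts" is majority voting over 32 sampled answers, versus scoring a single response. A rough sketch of the difference (sample_answer here is a hypothetical stand-in for one model call, not a real API from either vendor):

    from collections import Counter
    import random

    def sample_answer(question: str) -> str:
        # Hypothetical model call; returns one multiple-choice option.
        return random.choice(["A", "B", "C", "D"])

    def score_single(question: str, correct: str) -> bool:
        # One attempt: the single sampled answer must be right.
        return sample_answer(question) == correct

    def score_majority(question: str, correct: str, k: int = 32) -> bool:
        # k attempts: sample k answers and keep the most common one.
        votes = Counter(sample_answer(question) for _ in range(k))
        return votes.most_common(1)[0][0] == correct

    print(score_single("example question", "B"))
    print(score_majority("example question", "B"))

With enough samples, the majority vote can look noticeably better than a single attempt, which is why the two headline numbers aren't directly comparable.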


Feeling the heat? Did you actually watch the videos? That was a huge leap forward compared to anything existing at the moment. Orders of magnitude beyond a blog post discussing a model that maybe will finally be on par with GPT-4...


The OpenAI announcement is also more or less a blog post, isn't it?

Do we know how much time or money it takes to create a movie clip?


There was Sam Altman taking live prompt requests on Twitter and generating videos. They were not the same quality as some of the ones on the website, but they were still incredibly impressive.


And how much compute were those requests using?



