The long context length is of course incredible, but I'm more shocked that the Pro model is now on par with Ultra (~GPT-4, at least the original release). That implies that when they release 1.5 Ultra, we'll finally have a GPT-4 killer. And assuming 1.5 Pro is priced similarly to the current Pro, that's a 4x price advantage per token.
Not surprising that OpenAI shipped a blog post today about their video generation — I think they're feeling considerable heat right now.
Ultra benchmarked around the original release of GPT-4, not the current model. My understanding is that was fairly accurate — it's close to current GPT-4 but not quite equal. However, close-to-GPT-4 but 4x cheaper and 10x context length would be very impressive and IMO useful.
Feeling the heat? Did you actually watch the videos?
That was a huge leap forward compared to anything existing at the moment.
Orders of magnitude away from a blog post discussing a model that maybe will finally be on par with GPT-4...
Sam Altman was taking live prompt requests on Twitter and generating videos.
They were not the same quality as some of the ones on the website, but they were still incredibly impressive.