
> Yeah, I mean that's why we're both here and why we're discussing this very topic, right? :D

That wasn't specifically directed at "you", but more as a plea to everyone reading that comment ;)

I looked at a few benchmarks comparing the two, which, as in the case of Opus 3 vs Sonnet 3.5, is hard, since the benchmarks the wider community is interested in shift over time. I think this page[0] provides the best overview I can link to.

Yes, GPT-4 is better on the MMLU benchmark, but in all other benchmarks and the LMSYS Chatbot Arena scores[1], GPT-4o mini comes out ahead. Overall, the margin between them is so thin that it falls under my definition of "on par". I think OpenAI is generally a bit more conservative with its messaging here (which is understandable) and only advertises a model as "more capable" if it beats the other one in every benchmark they track, which AFAIK is the case for 4o mini vs 3.5 Turbo.

[0]: https://context.ai/compare/gpt-4o-mini/gpt-4

[1]: https://artificialanalysis.ai/models?models_selected=gpt-4o-...
