Not necessarily true anymore. As a counterexample, consider CodeLlama 34B, which is quite good (and which has replaced GPT-4 for my coding-assistant needs).
OpenAI's models are likely to remain the best, but I see open models catching up and becoming "good enough." Why pay for GPT-4 when I can run a model locally for free? (Barring the initial capital cost of a GPU — and not even that if you're, say, using a MacBook.)