Not surprising; just like when MS went to shit and then started to embrace 'open source'. Seems like a PR stunt. And when it comes to LLMs there is a multi-million-dollar barrier to entry to train a model, so it's fine to open up their embeddings etc.
Today big corp A will open up a little to court developers, and tomorrow when it gains dominance it will close up, and corp B will open up a little.
OpenAI is heavily influenced by big-R Rationalists, who fear misaligned AI being given the power to do bad things.
When they first started talking about this, lots of people dismissed it by saying "let's just keep the AI in a box", and even last year it was "what's so hard about an off switch?".
The problem with any model you can just download and run is that some complete idiot will do exactly that and give the AI agency it shouldn't have. Fortunately, for now the models are more of a threat to their users than anyone else: lawyers who use them to do lawyering without checking the results and lose their law licences, etc.
But that doesn't mean open models aren't a threat to people besides their users, as the artists complaining about losing work to Stable Diffusion, the law enforcement people concerned about illegal porn, the election-interference specialists worried about propaganda, anyone trying to use a search engine, and that research lab that found a huge number of novel nerve-agent candidates whose precursors aren't all listed as dual-use will all tell you, for different reasons.
> Fortunately, for now the models are more of a threat to their users than anyone else
Models have access to users, users have access to dangerous stuff. Seems like we are already vulnerable.
The AI splits a task into two parts and gets two people to each execute one part without knowing its effect. This was a scenario in one of Asimov's robot novels, though with the roles reversed.
AI models exposed to the public at large are a huge security hole. We've got to live with the consequences; there's no turning back now.
My impression is that OpenAI was founded by true believers with the best intentions, whose hopes were ultimately sidelined by the inexorable crush of business and finance.
You can run Gemma and hundreds of other models (many fine-tuned) in llama.cpp, and it's easy to swap to a different model.
It's important that there are companies publishing models that run locally. If some stop and others appear, that's OK. The worst thing that could happen is having AI only in the cloud.
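For anyone who hasn't tried it, here's a minimal sketch of what "easy to swap" looks like through the llama-cpp-python bindings (the GGUF file names are just placeholders for whatever models you've already downloaded):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    def ask(model_path: str, prompt: str) -> str:
        # Load a local GGUF model and run a single completion.
        llm = Llama(model_path=model_path, n_ctx=2048, verbose=False)
        out = llm(prompt, max_tokens=64)
        return out["choices"][0]["text"]

    # Swapping models is just pointing at a different file on disk:
    print(ask("./models/gemma-2-2b-it-Q4_K_M.gguf", "Explain GGUF in one sentence."))
    print(ask("./models/mistral-7b-instruct-Q4_K_M.gguf", "Explain GGUF in one sentence."))

The same GGUF files also work with the llama.cpp CLI and server, so nothing in that workflow depends on anyone's cloud.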
> And when it comes to LLMs there is a multi-million-dollar barrier to entry to train a model, so it's fine to open up their embeddings etc.
That barrier is the first basic moat: the hundreds of millions of dollars needed to train a better model, which eliminates tons of companies and reduces the field to a handful.
The second moat is ownership of the vast amounts of data to train the models on.
The third is the hardware and data-center setup needed to create a model in a reasonable amount of time, faster than the others.
Put all three together and you have Meta, Google, Apple, and Microsoft.
The last is the silicon itself: Nvidia, which has >80% of the entire GPU market and is the #1 AI shovel maker for both inference and training.
Eh, I don't really blame anyone for being cynical, but open-weight AI model releases seem like a pretty clear mutual benefit for Google. PR aside, they can also push people to try these models on TPUs and the like. If anything, this seems like one of those cases where people win because of competition. OpenAI going closed may have felt like the most obvious betrayal ever, but OTOH anyone whose best interest is eating their lunch has an incentive to push actually-open AI, and that's a lot of parties.
Seems like anyone releasing open-weight models today could close up any day, but at least while competition among wealthy companies is hot, we're going to have a lot of nice things.