I suspect it is its own model. Running it on 10B+ user queries per day, you're gonna want to optimize everything you can about it - so you'd want something tightly optimized for the exact problem rather than a general purpose model with careful prompting.
Wonder if they'll eventually release Whisper support. Groq has been great for transcribing 1hr+ calls at a significantly lower price than OpenAI ($0.04/hr vs. $0.36/hr).
Does it run well on CPU? I've used it locally but only with my high end (consumer/gaming) GPU, and haven't got round to finding out how it does on weaker machines.
That's pretty much exactly how I started. Ran whisper.cpp locally for a while on a 3070Ti. It worked quite well when n=1.
For our use case, we may get one audio file at a time, or we may get ten. Of course queuing them is possible, but we decided to prioritize speed & reliability over self-hosting.
Cerebras has really impressed me with their technical chops and their approach in the modern LLM era. I hope they do well, as I've heard they are en route to an IPO. It will be interesting to see if they can make a dent vs NVIDIA and other players in this space.
You don't need quantization-aware training on larger models. 4-bit 70B and 405B models exhibit close to zero degradation in output with post-training quantization[1][2].
Probably because of how bloody large they are. The quantization errors likely cancel each other out over the sum of so many terms.
It's the same reason you can get a pretty good reconstruction when you add random noise to an image and then apply a binary threshold function to it. The more pixels there are, the more recognizable the B&W reconstruction will be.
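To make that concrete, here's a toy version of the B&W analogy (just a NumPy sketch, nothing specific to how real LLM quantizers work): threshold a noisy gray gradient to pure black and white, then average over progressively larger blocks; the per-pixel errors cancel and the block averages converge back to the original gray values.

    import numpy as np

    rng = np.random.default_rng(0)

    # A smooth "image": a horizontal gray gradient with values in [0, 1].
    height, width = 256, 256
    image = np.tile(np.linspace(0, 1, width), (height, 1))

    # "Add random noise, then apply a binary threshold": each pixel becomes
    # 0 or 1, with probability of being 1 equal to its original gray value.
    noisy = image + rng.uniform(-0.5, 0.5, image.shape)
    binary = (noisy > 0.5).astype(float)

    # Averaging over larger and larger blocks recovers the gradient better,
    # because the per-pixel errors cancel out over many terms.
    for block in (1, 4, 16, 64):
        blocks = binary.reshape(height // block, block, width // block, block)
        target = image.reshape(height // block, block, width // block, block)
        err = np.abs(blocks.mean(axis=(1, 3)) - target.mean(axis=(1, 3))).mean()
        print(f"block={block:3d}  mean abs error={err:.4f}")

The error per reconstructed value shrinks roughly with the square root of the number of pixels averaged, which is the same intuition for why quantization noise washes out over a sum of billions of weights.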
If you're using an LLM as a compressed version of a search index, you'll be constantly fighting hallucinations. Respectfully, you're not thinking big-picture enough.
There are LLMs today that are amazing at coding, and when you allow them to iterate (e.g. respond to compiler errors), the quality is pretty impressive. If you can run an LLM 3x faster, you can fit a much bigger feedback loop into the same period of time.
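The loop is basically this (a rough sketch: `ask_llm` stands in for whatever model/API you're calling, and `go build` is just an example compiler):

    import subprocess

    def compile_and_iterate(prompt, ask_llm, max_rounds=5):
        """Ask an LLM for code, compile it, and feed errors back until it builds."""
        code = ask_llm(prompt)
        for _ in range(max_rounds):
            with open("main.go", "w") as f:
                f.write(code)
            result = subprocess.run(["go", "build", "main.go"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # it compiles; a human still reviews the logic
            # Feed the compiler output back and try again. Faster inference
            # means more of these rounds fit in the same wall-clock budget.
            code = ask_llm(f"Fix this build error:\n{result.stderr}\n\nCode:\n{code}")
        raise RuntimeError("still not compiling after max_rounds attempts")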
There are efforts to enable LLMs to "think" using chain-of-thought, where the LLM writes out its reasoning as a "proof"-style list of steps. Sometimes, like a person, it reaches a logical dead end. If you can run 3x faster, you can start to run the "thought chain" as more of a "tree", where the logic is critiqued and adapted and where many different solutions can be tried. This can all happen in parallel (well, each sub-branch can).
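The "tree" version has roughly this shape (again just a sketch; `ask_llm` and `score_answer` are hypothetical stand-ins for a sampled model call and a critique step):

    from concurrent.futures import ThreadPoolExecutor

    def tree_of_thought(question, ask_llm, score_answer, branches=4, depth=3):
        """Expand several reasoning chains in parallel, keep the best-scored ones."""
        frontier = [""]  # partial reasoning chains
        for _ in range(depth):
            with ThreadPoolExecutor(max_workers=branches * len(frontier)) as pool:
                candidates = list(pool.map(
                    lambda chain: chain + "\n" + ask_llm(question, chain),
                    frontier * branches))  # each chain expanded `branches` ways
            # Critique each branch and prune dead ends; 3x faster inference means
            # 3x more branches (or deeper trees) in the same latency budget.
            frontier = sorted(candidates, key=score_answer, reverse=True)[:branches]
        return frontier[0]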
Then there are "agent" use cases, where an LLM has to take actions on its own in response to real-world situations. Speed really impacts user-perception of quality.
> There are LLMs today that are amazing at coding, and when you allow them to iterate (e.g. respond to compiler errors), the quality is pretty impressive. If you can run an LLM 3x faster, you can fit a much bigger feedback loop into the same period of time.
Well, now the compiler is the bottleneck, isn't it? And you would still need a human to check for bugs that aren't caught by the compiler.
Still nice to have inference speed improvements tho.
Something will always be the bottleneck, and it probably won’t be the speed of electrons for a while ;)
Some compilers (Go's) are faster than others (javac), and some languages are interpreted and can only be checked through tests. Moving the bottleneck from the AI code-gen step to the same bottleneck a person faces seems like a win.
And yet it takes a non-zero amount of time. I think an apt comparison is a language like C++ vs Python. Yea, technically you can write the same logic in both, but you can't genuinely say that "spelling out the code" takes the same amount of time in each. It becomes a meaningful difference across weeks of work.
With LLM pair programming, you can basically say "add a button to this widget that calls this callback" or "call this API with the result of this operation", and the LLM will spit out code that does that thing. If your change is entirely within 1-2 files and under 300 LOC, the code shows up in a few seconds, right in your IDE, and is probably syntactically correct.
It's human-driven, and the LLM just handles the writing. The LLM isn't doing large refactors, nor is it designing scalable systems on its own. A human is doing that still. But it does speed up the process noticeably.
If the speed is used to get better quality with no more input from the user then sure, that is great. But that is not the only way to get better quality (though I agree that there are some low hanging fruit in the area).
To be honest, most LLMs are merely reasonable at coding; they're not great.
Sure, they can code small stuff.
But they can't refactor large software projects, or upgrade them.
Upgrading large Java projects is exactly what AWS wants you to believe their tooling can do, but the ergonomics aren't great.
I think most of the capability problems with coding agents aren't in the AI itself; it's that we haven't cracked how to let them interact with the codebase effectively yet. When I refactor something, I'm not doing it all at once; it's a step-by-step process, and none of the individual steps is that complicated. Translating that over to an agent feels like we just haven't got the right harness yet.
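The harness might not need to be much more than enforcing that step-by-step structure, e.g. something like this sketch (the `propose_next_step` and `apply_step` helpers are hypothetical, and it assumes a pytest + git workflow):

    import subprocess

    def incremental_refactor(goal, propose_next_step, apply_step, max_steps=20):
        """Drive a refactor one small, test-verified step at a time."""
        history = []
        for _ in range(max_steps):
            step = propose_next_step(goal, history)  # e.g. "extract parse() into parser.py"
            if step is None:
                break                                # agent says the refactor is done
            apply_step(step)                         # edit only the files this step touches
            tests = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
            if tests.returncode != 0:
                subprocess.run(["git", "checkout", "--", "."])  # roll back the bad step
                history.append((step, "reverted: " + tests.stdout[-2000:]))
            else:
                subprocess.run(["git", "commit", "-am", step])
                history.append((step, "ok"))
        return history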
Honestly, most software tasks aren’t refactoring large projects, so it’s probably OK.
As the world gets more internet connected and more online, we’ll have an ever-expanding list of “small stuff”: glue code that mixes an ever-growing list of data sources/sinks and visualizations together. Much of it is “write once and leave running”.
Big companies (e.g. Google) have built complex build systems (e.g. Bazel) to isolate small reusable libraries within a larger repo, which was a necessity for helping unbelievably large development teams manage a shared repository. An LLM acting in its small corner of the world seems well suited to this sort of tooling, even if it can’t refactor large projects with sweeping changes.
I suspect we’ll develop even more abstractions and layers to isolate LLMs and their knowledge of the world. We already have containers and orchestration enabling “serverless” applications, and embedded webviews for GUIs.
Think about ChatGPT and its Python interpreter, or Claude and its web view. They all come with nice harnesses that support a boilerplate-free playground for short bits of code. That may continue to accelerate and grow in power.
> The biggest time sink for me is validating answers so not sure I agree on that take.
But you're assuming that it'll always be validated by humans. I'd imagine that most validation (and subsequent processing, especially going forward) will be done on machines.
By comparison with reality. The initial LLMs had "reality" be "a training set of text"; when ChatGPT came out everyone rapidly expanded into RLHF (reinforcement learning from human feedback), and now that there are vision-and-text models, the training and feedback are grounded in a much broader slice of reality than just text.
That's one way to do it, but it's overkill for this specific thing: self-driving cars or robotics, or natural use of smart-[phone|watch|glass|doorbell|fridge] data, would likely be sufficient.
Total surveillance may be necessary for other reasons, like making sure organised crime can't blackmail anyone because the state already knows it all, but it's overkill for AI.
Not if you source your training data from reality.
Are you treating "the internet" as "reality" with this line of questions?
The internet is the map; don't mistake the map for the territory. It's fine as a bootstrap but not as the final result, just as it's OK for a human to research a topic by reading Wikipedia but not to use it as the only source.
Sooner or later someone is going to figure out how to do active training on AI models. It's the holy grail of AI before AGI. This would allow you to do base training on a small set of very high quality data, and then let the model actively decide what it wants to train on going forward or let it "forget" what it wants to unlearn.
1. AI can do what we can do, in much the same way we can do it, because it's biologically inspired. Not a perfect copy, but close enough for the general case of this argument.
2. AI can't ever be perfect because of the same reasons we can't ever be perfect: it's impossible to become certain of anything in finite time and with finite examples.
3. AI can still reach higher performance in specific things than us — not everything, not yet — because the information processing speedup going from synapses to transistors is of the same order of magnitude as walking is to continental drift, so when there exists sufficient training data to overcome the inefficiency of the model, we can make models absorb approximately all of that information.
Does the AI need to know or the curator of the dataset? If the curator took a camera and walked outside (or let a drone wander around for a while), do you believe this problem would still arise?
For those looking to easily build on top of this or other OpenAI-compatible LLM APIs -- have a look at Langroid[1] (I am the lead dev): you can easily switch to Cerebras (or Groq, or other LLMs/providers). E.g., after installing Langroid in your virtual env and setting CEREBRAS_API_KEY in your env or .env file, you can run a simple chat example[2] like this:
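In essence the example boils down to something like this minimal sketch (the exact "cerebras/..." model string here is illustrative - check the linked example and docs for the precise name):

    import langroid as lr
    import langroid.language_models as lm

    # Point Langroid at Cerebras' OpenAI-compatible API.
    # The "cerebras/llama3.1-70b" model string is illustrative; see the docs
    # for the exact provider prefix and model name.
    llm_config = lm.OpenAIGPTConfig(chat_model="cerebras/llama3.1-70b")

    agent = lr.ChatAgent(lr.ChatAgentConfig(llm=llm_config))
    task = lr.Task(agent, system_message="You are a helpful assistant.")
    task.run()  # starts an interactive chat loop in the terminal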
Wow, software is hard! Imagine an entire company working to build an insanely huge and expensive wafer scale chip and your super smart and highly motivated machine learning engineers get 1/3 of peak performance on their first attempt. When people say NVIDIA has no moat I'm going to remember this - partly because it does show that they do, and partly because it shows that with time the moat can probably be crossed...
I wonder at what point increasing LLM throughput starts to serve only negative uses of AI. This is already two orders of magnitude faster than humans can read. Are there any significant legitimate uses beyond spamming AI-generated SEO articles and fake Amazon books more quickly and cheaply?
The way things are going, it looks like tokens/s is going to play a big role. o1-preview devours tokens, and now Anthropic's computer use is devouring them too. Video generation is extremely token-heavy as well.
It's starting to look like you can boost utility linearly by scaling token usage per query exponentially. If so, we might see companies slow down on scaling parameters and instead focus on scaling token usage.
Ex-Cerebras engineer here. The chip is very powerful and there is no 'one way' to do things. Rearchitecting data flow, changing up data layout, etc. can lead to significant performance improvements. That's just my informed speculation; there's likely more perf somewhere.
The first implementation of inference on the Wafer Scale Engine utilized only a fraction of its peak bandwidth, compute, and IO capacity. Today’s release is the culmination of numerous software, hardware, and ML improvements we made to our stack to greatly improve the utilization and real-world performance of Cerebras Inference.
We’ve rewritten or optimized the most critical kernels such as MatMul, reduce/broadcast, element-wise ops, and activations. Wafer IO has been streamlined to run asynchronously from compute. This release also implements speculative decoding, a widely used technique that uses a small model and a large model in tandem to generate answers faster.
They said in the announcement that they've implemented speculative decoding, so that might have a lot to do with it.
A big question is what they're using as their draft model; there are ways to do it losslessly, but they could also choose to trade off accuracy for a bigger increase in speed.
It seems they also support only a very short sequence length (1k tokens).
Speculative decoding does not trade off accuracy. You reject the speculated tokens if the original model does not accept them, kind of like branch prediction. All these providers and third parties benchmark each other's solutions, so if there is a drop in accuracy, someone will report it. Their sequence length is 8k.
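For reference, the lossless greedy variant looks roughly like this (a toy sketch with stand-in `draft_model`/`target_model` functions, not anyone's actual implementation; `target_model` is assumed to return its own greedy token at each drafted position in a single pass):

    def speculative_decode(prompt_tokens, draft_model, target_model, k=4, max_new=128):
        """Greedy speculative decoding: the draft proposes k tokens cheaply,
        the target verifies them in one pass and keeps the agreeing prefix."""
        tokens = list(prompt_tokens)
        while len(tokens) < len(prompt_tokens) + max_new:
            draft, ctx = [], list(tokens)
            for _ in range(k):                      # small model guesses k tokens ahead
                t = draft_model(ctx)
                draft.append(t)
                ctx.append(t)
            # One call to the big model scores all k positions at once;
            # that batching is where the speedup comes from.
            verified = target_model(tokens, draft)  # target's greedy token at each position
            for guess, truth in zip(draft, verified):
                tokens.append(truth)                # always the target model's own choice,
                if guess != truth:                  # so output matches normal decoding exactly
                    break                           # mispredicted: throw away the rest
        return tokens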
I found this on their product page, though just for peak power:
> At 16 RU, and peak sustained system power of 23kW, the CS-3 packs the performance of a room full of servers into a single unit the size of a dorm room mini-fridge.
You're not wrong, but how it is currently implemented is pretty deceptive. I would have appreciated knowing the login prompt before interacting with the page. I am curious how many bounces they have because of this one dark pattern.
"bitnet.cpp achieves speedups of 1.37x to 5.07x on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by 55.4% to 70.0%, further boosting overall efficiency. On x86 CPUs, speedups range from 2.37x to 6.17x with energy reductions between 71.9% to 82.2%. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices. "
Bitnet models are just another piece in the ocean of techniques where there may possibly be alpha at large parameter counts... but no one will know until a massive investment is made, and that investment hasn't happened because the people with resources have much surer things to invest in.
There's this insufferable crowd of people who just keep going on and on about it like it's some magic bullet that will let them run 405B on their home PC but if it was so simple it's not like the 5 or so companies in the world putting out frontier models need little Timmy 3090 to tell them about the technique: we don't need it shoehorned into every single release.
You need an API key - I got one from https://cloud.cerebras.ai/ but I'm not sure if there's a waiting list at the moment - then you can do this:
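It's an OpenAI-compatible endpoint, so roughly (I'm going from memory, so treat the base URL and model name as assumptions to check against their docs):

    import os
    from openai import OpenAI

    # Cerebras exposes an OpenAI-compatible API; the base URL and model name
    # used here are assumptions - check the Cerebras docs.
    client = OpenAI(
        base_url="https://api.cerebras.ai/v1",
        api_key=os.environ["CEREBRAS_API_KEY"],
    )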
Then you can run lightning fast prompts like the example below. Here's a video of that running, it's very speedy: https://static.simonwillison.net/static/2024/cerebras-is-fas...
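Continuing from the client set up above (the model name is again an assumption):

    response = client.chat.completions.create(
        model="llama3.1-70b",
        messages=[{"role": "user", "content": "Write a haiku about fast inference"}],
    )
    print(response.choices[0].message.content)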