Groq (groq.com)
11 points by gregsadetsky 11 months ago | 7 comments



This is insanely fast and obviously a game changer over time. You should try the demo!

This seems to be using custom inference-only HW. It makes a ton of sense to use different HW for inference vs. training; the requirements are different.

Nvidia, as far as I can tell, is going all-in on training and hoping the same HW will be used for inference.

Exciting times!


Hi there, I work for Groq. That's right. We love graphics processors for training, but for inference our language processing unit (LPU) is by far the fastest and has the lowest latency. Feel free to ask me anything.
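
If you want to measure the latency yourself, here's a minimal sketch that times the first streamed token over the OpenAI-compatible endpoint. The base URL and the model name are examples taken from the public docs and may change:

    # Minimal sketch: measure time-to-first-token (TTFT) via streaming.
    # Assumes a GROQ_API_KEY env var; endpoint and model name are examples.
    import os
    import time

    from openai import OpenAI  # pip install openai

    client = OpenAI(
        api_key=os.environ["GROQ_API_KEY"],
        base_url="https://api.groq.com/openai/v1",  # OpenAI-compatible endpoint
    )

    start = time.perf_counter()
    stream = client.chat.completions.create(
        model="mixtral-8x7b-32768",  # example model name; check the docs
        messages=[{"role": "user", "content": "Explain LPUs in one sentence."}],
        stream=True,
    )

    first_token_at = None
    chunks = []
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            if first_token_at is None:
                first_token_at = time.perf_counter()  # first visible token
            chunks.append(chunk.choices[0].delta.content)

    print(f"time to first token: {first_token_at - start:.3f}s")
    print(f"total: {time.perf_counter() - start:.3f}s, {len(chunks)} chunks")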


What's the scale of hardware behind this demo, in terms of watts, transistors and cost?


Are they only available on the cloud? Are you planning on releasing a consumer version?


Mostly available as a service via cloud API at the moment. The systems themselves are too big for consumers, but we will sell systems to corporations.
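
Since the API is OpenAI-compatible HTTP, a raw call looks roughly like this (a sketch; the endpoint path and model name are examples and may have changed):

    # Minimal sketch: call the Groq cloud API directly over HTTP.
    # Assumes a GROQ_API_KEY env var; the model name is an example.
    import os

    import requests

    resp = requests.post(
        "https://api.groq.com/openai/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"},
        json={
            "model": "mixtral-8x7b-32768",  # example; see docs for current models
            "messages": [{"role": "user", "content": "Hello from HN!"}],
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["choices"][0]["message"]["content"])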


It sometimes refuses to talk in my native language. When it complies, it makes basic spelling errors and the response is much shorter. Is this an AI revolt?




