I can't easily type that out - and once the format can't be read / edited in a simple text editor, I'm starting to lean towards a nice binary format like protobuf.
This is insanely fast, obviously a game changer over time. You should try the demo!
This seems to be using custom inference-only HW. It makes a ton of sense to use different HW for inference vs training, since the requirements are different.
Nvidia, as far as I can tell, is going all-in on training and hoping the same HW will be used for inference.
Hi there, I work for Groq. That's right. We love graphics processors for training but for inference our language processor (LPU) is by far the fastest and lowest latency. Feel free to ask me anything.
I was debugging a Heisenbug once, developing embedded FW for a mobile phone.
After some time, I noticed that the phone seemingly only crashed in one area of the open office floorplan where I was working.
I started walking around the office testing this theory, not really believing it. But after a while, I had hard evidence that the bug would only manifest once I entered that part of the office.
When I came to terms with the fact that I wasn't hallucinating, I realised what the problem was. There was poor reception in that part of the office, causing the phone's modem to switch from 4G wideband to narrowband (glossing over details here), which triggered the bug.
Easy to see with hindsight, but I was very confused there and then.
In case the author sees this:
Thank you for enabling CORS so that it's possible to plot examples from other sites. It would be awesome if the Content-Range header was allowed as well.
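For context: Content-Range isn't one of the CORS-safelisted response headers, so cross-origin JS can't read it unless the server explicitly exposes it. A minimal sketch of the response headers that would do it (assuming the site is happy serving any origin):

```
Access-Control-Allow-Origin: *
Access-Control-Expose-Headers: Content-Range
```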
Thank you for csvbase, today is the first time I've seen it.
I believe PapaParse, a JS library for parsing CSV files, uses Content-Range to stream large CSV files in chunks.
https://csvplot.com uses PapaParse under the hood; I saw a warning about this in the dev console, which is why I posted here. I'm not sure why it seemingly works fine anyway.
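Glossing over PapaParse internals, chunked streaming of a remote file boils down to issuing Range requests and reading the file's total size out of the Content-Range response header (e.g. "bytes 0-65535/1048576"). A sketch of that header parsing, assuming the usual bytes-range format (the function name is mine, not PapaParse's):

```javascript
// Parse an HTTP Content-Range value like "bytes 0-65535/1048576".
// Returns { start, end, total } on success, or null if the header
// doesn't match the byte-range form; total is null when the server
// reports an unknown length ("bytes 0-9/*").
function parseContentRange(header) {
  const m = /^bytes (\d+)-(\d+)\/(\d+|\*)$/.exec(header);
  if (!m) return null;
  return {
    start: Number(m[1]),
    end: Number(m[2]),
    total: m[3] === "*" ? null : Number(m[3]),
  };
}
```

If the server never exposes Content-Range to cross-origin callers, this parse would fail and a library could fall back to downloading the whole file in one go, which might explain why it seemingly works anyway.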
My guess would be iHeartMedia