
I was comparing FiNER, GLiNER, and smolagents in a recent blog post on my Substack, and while the first two are fast and give reasonably good results, running an LLM locally is easily 10x better. A rough sketch of the local-LLM approach is below.
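
For anyone unfamiliar with what "running an LLM locally" looks like for this kind of extraction, here's a hedged sketch (assuming a running Ollama server and the ollama Python client; the model name and prompt are illustrative, not from the post):

    import json
    import ollama  # client for a locally running Ollama server

    text = "Acme Corp reported Q3 revenue of $12.4M, up 8% year over year."
    prompt = (
        "Extract all organization, money, and percentage entities from the "
        'text below. Reply with a JSON list of {"text", "label"} objects.\n\n'
        + text
    )

    response = ollama.chat(
        model="llama3.1:8b",  # illustrative; any locally served model works
        messages=[{"role": "user", "content": prompt}],
    )
    # In practice you'd parse defensively; LLMs don't always emit clean JSON.
    entities = json.loads(response["message"]["content"])
    print(entities)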


Would love to read that post - we're considering using GLiNER for discrete parts of our ingestion pipeline, where we assumed it would be a great perf/$ drop-in for larger models.
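
For reference, GLiNER's zero-shot extraction is only a few lines (a minimal sketch based on the gliner package's published quickstart; the checkpoint and threshold here are illustrative, not a recommendation):

    from gliner import GLiNER

    model = GLiNER.from_pretrained("urchade/gliner_base")

    text = "Acme Corp reported Q3 revenue of $12.4M, up 8% year over year."
    labels = ["organization", "money", "percentage"]

    # predict_entities returns a list of {"text", "label", "score", ...} dicts.
    for ent in model.predict_entities(text, labels, threshold=0.5):
        print(ent["text"], "=>", ent["label"])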



Thank you



