
It's summarisation; who cares if it's right as long as you feel confident after reading it? /s.

In my experience, even GPT-4o is terrible at surfacing information from anything longer than a few pages.

It might be an issue with dimensionality reduction in general, though. If you think about it, you can't condense much of what a given amount of text contains into less text, unless the source was written extremely inefficiently.

It seems to be okay at producing outlines, or maybe a form of abstract, but you would never know where it fails unless you read the entire source text first. IMO, it's not worth the risk unless you plan to read the source anyway, or the result isn't really important.




Try walking through a Wikipedia article while having an LLM summarize every few paragraphs; it's often wildly inaccurate.
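The walkthrough above is easy to try yourself. A minimal sketch: split an article into few-paragraph chunks and summarize each one, so each summary can be checked against its source chunk. Here `summarize` is a hypothetical placeholder for whatever LLM call you use; the chunking is the only real logic.

```python
def chunk_paragraphs(text: str, per_chunk: int = 3) -> list[str]:
    """Group an article's paragraphs into chunks of `per_chunk` paragraphs."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    return [
        "\n\n".join(paragraphs[i:i + per_chunk])
        for i in range(0, len(paragraphs), per_chunk)
    ]

def summarize(chunk: str) -> str:
    # Hypothetical stand-in: replace with a real LLM API call.
    # Here we just take the first sentence as a naive "summary".
    return chunk.split(".")[0].strip() + "."

article = (
    "Paris is the capital of France. It sits on the Seine.\n\n"
    "The city has about two million residents.\n\n"
    "It hosted the 2024 Summer Olympics.\n\n"
    "Tourism is a major part of its economy."
)

# Compare each chunk's summary against the chunk itself as you read.
for i, chunk in enumerate(chunk_paragraphs(article), start=1):
    print(f"--- chunk {i} ---")
    print(summarize(chunk))
```

The point of keeping the chunks is that you can spot-check every summary against its few source paragraphs, which is exactly where the inaccuracies show up.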



