
The crawlers themselves are not that different: what differs is their number, how the information is used once scraped (including whether the source is referenced at all), and whether they obey the rules:

1. Their number: every other company and the mangy mutt that is its mascot is scraping for LLMs at the moment, so you get hit by them far more often than by search engine bots and the like. This also makes them harder to block: even ignoring tricks like using botnets to spread requests over many source addresses (potentially the residential connections of unwitting users infected by malware), the sheer number of requests coming from so many places, with new places appearing all the time, means you cannot maintain a practical blocklist of source addresses (see the rate-limiting sketch after this list). The volume also means small sites are easily swamped, much like when HN, slashdot, or a popular reddit subsection links to a site and it gets “hugged to death” by a sudden glut of genuinely interested individual visitors.

2. Use of the information: Search engines actually provide something back: they send people to your site, which is useful where that traffic is wanted, and in many cases it is. LLMs tend not to: by their very nature, few of their results come with any indication of the sources behind their guesswork. They scrape, they take, and they give nothing back. Search engines had a vested interest in your site surviving because they don't want to hand out dead links; those scraping for LLMs have no such requirement, because they can still summarise your work from what is effectively cached within their model. This isn't unique to LLMs: go back a few years to pre-LLM days and you will find several significant legal cases about search engines offering summaries of the information found instead of just sending people to the site it came from.

3. Ignoring rules: Because so many sites now attempt to block scrapers, usually at a minimum via the accepted methods to discourage it (robots.txt, nofollow attributes, etc.; see the sketch below), these signals are simply ignored. Sometimes this is malicious, the people running the scrapers just not caring despite knowing the problem they could create; sometimes it is like the spam problem in mail: each scraper thinks it'll be fine because it is only them, with each of the many others thinking the same thing… With companies as big as Meta openly defending piracy as just fine for the purposes of LLM training, others see that as a declaration of open season. Those that are malicious, or at least amoral (most of them), don't care. Once they have scraped your data they have, as mentioned above, no vested interest in whether your site lives or dies (either by withering away from lack of attention or by falling over under their load, never to be brought back up); in fact they may have an incentive to want your site dead: it would no longer compete with the LLM as a source of information.
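To make point 1 concrete, here is a minimal sketch (Python, hypothetical names) of the classic per-source-address token-bucket rate limiter. It works fine against a handful of aggressive bots coming from known addresses:

    import time
    from collections import defaultdict

    RATE = 1.0    # allowed requests per second, per source address
    BURST = 10.0  # bucket size: bursts up to this many requests tolerated

    buckets = defaultdict(lambda: {"tokens": BURST, "stamp": time.monotonic()})

    def allow(source_ip: str) -> bool:
        """Token bucket keyed on source IP: refills at RATE, caps at BURST."""
        b = buckets[source_ip]
        now = time.monotonic()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["stamp"]) * RATE)
        b["stamp"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True   # within budget: serve the request
        return False      # over budget: throttle or block this address

A botnet-backed scraper sidesteps this entirely: ten thousand addresses each making one request a minute never trip any per-address budget, yet together they deliver the load of one very aggressive crawler. That is exactly why address-based blocklists and limits stop being practical.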
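And for point 3, the "accepted methods" are as simple as they are unenforceable. A robots.txt along these lines (the user agent tokens shown are ones publicly documented by their operators, e.g. OpenAI's GPTBot and Common Crawl's CCBot; the list is illustrative, not exhaustive) is a polite request and nothing more; a crawler that ignores it suffers no technical consequence:

    # robots.txt: asks (does not force) named crawlers to stay away
    User-agent: GPTBot            # OpenAI's documented crawler
    Disallow: /

    User-agent: CCBot             # Common Crawl, a common LLM training source
    Disallow: /

    User-agent: Google-Extended   # opt-out token for Google AI training use
    Disallow: /

    User-agent: *                 # everyone else may crawl everything
    Disallow: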

None of these alone is the problem; in combination, they are a significant one.
