Anubis functions as a kind of reverse CAPTCHA that prevents web crawlers from scraping content at scale. Instead of verifying that a visitor is human, it imposes a computational challenge, making bot operations costly. Named after the Egyptian deity who weighs souls, Anubis aims to safeguard sites against large companies whose Large Language Models (LLMs) require vast volumes of training data. Existing measures like robots.txt provide only limited protection, since crawlers can simply ignore them; Anubis instead deters the persistent, repeated visits that aggressive scraping operations make.
Anubis operates like a CAPTCHA aimed at web crawlers: each visitor must solve a proof-of-work challenge before content is served, raising the operational cost for companies running LLM scraping bots.
The scheme weighs bot operators' willingness to pay for access. Solving the challenge once is negligible for an individual visitor, but at the scale of a large data center making millions of requests, the cumulative compute cost becomes an expensive deterrent.
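Anubis's actual challenge is served as JavaScript that runs in the visitor's browser; the following Python sketch only illustrates the general hash-based proof-of-work idea behind it. The function names, the difficulty value, and the challenge format are illustrative assumptions, not Anubis's real implementation: finding a nonce whose hash has enough leading zeros is expensive for the client, while verifying the answer costs the server a single hash.

```python
import hashlib
import secrets

DIFFICULTY = 4  # required leading zero hex digits (illustrative value)

def issue_challenge() -> str:
    # Server side: a fresh random challenge string per visitor.
    return secrets.token_hex(16)

def solve(challenge: str, difficulty: int = DIFFICULTY) -> int:
    # Client side: brute-force a nonce until the hash has the required
    # prefix. Expected work grows roughly as 16 ** difficulty hashes.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

def verify(challenge: str, nonce: int, difficulty: int = DIFFICULTY) -> bool:
    # Server side: verification is a single hash, so it stays cheap
    # no matter how much work the client had to do.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)

challenge = issue_challenge()
nonce = solve(challenge)
print(verify(challenge, nonce))
```

The asymmetry is the point: a person loading one page pays the cost once, while a crawler hitting every URL on a site pays it on every request.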