Amazon quietly blocks AI bots from Meta, Google, Huawei and more
Briefly

Amazon updated its robots.txt to add six more AI-related crawlers, explicitly prohibiting bots from Meta, Google, Huawei, Mistral and others. The changes follow earlier blocks targeting crawlers such as Anthropic's Claude, Perplexity and Google's Project Mariner. Robots.txt directives are advisory rather than legally enforceable and rely on crawlers behaving properly. The move reflects a more aggressive approach toward third-party AI tools that could scrape product pages, monitor prices or attempt automated purchases. The marketplace contains vast e-commerce data and supports a $56 billion advertising business, creating significant commercial stakes for control over that data.
"Amazon is desperately trying to stop AI companies from training models on its data," Kaziukėnas wrote in a LinkedIn post on Thursday. "I think it is too late to stop AI training - Amazon's data is already in the datasets ChatGPT and others are using. But Amazon is definitely not interested in helping anyone build the future of AI shopping. If that is indeed the future, Amazon wants to build it itself."
The update builds on earlier restrictions Amazon added at least a month ago targeting crawlers from Anthropic's Claude, Perplexity and Google's Project Mariner agents, The Information reported. Robots.txt files are a standard tool that websites use to give instructions to automated crawlers such as search engines. Because the directives are advisory rather than enforceable, they act only as signposts for automated systems: "well-behaved" crawlers are expected to respect the block, according to Kaziukėnas, but nothing technically compels them to.
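To illustrate how advisory these directives are, the sketch below uses Python's standard-library robots.txt parser against a hypothetical file in the style of Amazon's blocks. The user-agent tokens and URL are illustrative assumptions, not Amazon's actual robots.txt; the point is that the "block" only works if the crawler chooses to consult it.

```python
from urllib import robotparser

# Hypothetical robots.txt in the style described in the article:
# named AI crawlers are disallowed site-wide, everyone else allowed.
# These agent tokens are illustrative, not Amazon's exact entries.
ROBOTS_TXT = """\
User-agent: ClaudeBot
Disallow: /

User-agent: PerplexityBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A well-behaved crawler calls can_fetch() before requesting a page
# and skips URLs it is blocked from. A misbehaving crawler can simply
# never run this check - robots.txt imposes no technical barrier.
print(parser.can_fetch("ClaudeBot", "https://www.amazon.com/dp/B000EXAMPLE"))   # False
print(parser.can_fetch("Googlebot", "https://www.amazon.com/dp/B000EXAMPLE"))   # True
```

In practice, compliant crawlers fetch `/robots.txt` once per host and cache it; the check itself is entirely voluntary, which is why the article describes these rules as signposts rather than enforcement.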
Read at Digiday