Cloudflare has launched new features that let website owners permit or deny AI crawlers access to their content. The initiative requires AI companies to clearly state their purpose for crawling, whether training, inference, or search, and aims to establish a permission-based model in response to concerns about content scraping. Matthew Prince emphasized the importance of safeguarding online content creators while enabling AI innovation.
Cloudflare has announced new capabilities aimed at blocking AI crawlers from accessing content without permission or compensation. The settings, available by default from today (1st July), allow website owners to decide whether AI crawlers can access their content and how AI companies may use it.
‘If the internet is going to survive the age of AI, we need to give publishers the control they deserve and build a new economic model that works for everyone - creators, consumers, tomorrow's AI founders, and the future of the web itself,’ said Matthew Prince, co-founder and CEO of Cloudflare.
The company described the move as ‘taking the next step’ in enforcing a permission-based model, with AI companies now required to obtain ‘explicit permission’ from a website before scraping its content.
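For context, the prevailing opt-out mechanism today is a site's robots.txt file. The snippet below is a generic illustration, not Cloudflare's mechanism: the bot names are publicly documented AI crawler user-agents (OpenAI's GPTBot and Common Crawl's CCBot) used here as examples of how a site can refuse known AI crawlers while leaving the site open to others:

    # Refuse specific AI crawlers (example user-agents; list is illustrative)
    User-agent: GPTBot
    Disallow: /

    User-agent: CCBot
    Disallow: /

    # All other crawlers remain permitted
    User-agent: *
    Allow: /

The limitation of this approach is that robots.txt relies on voluntary compliance by the crawler; network-level enforcement of the kind Cloudflare is introducing is intended to close that gap.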
Prince noted that AI crawlers have been ‘scraping content without limits’ and the company aims to ‘put the power back in the hands of creators while still helping AI companies innovate.’