Search Engines, AI, And The Long Fight Over Fair Use
Briefly

"Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines: tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued. Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use, and a necessary condition for a free and open internet."
"Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That's why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don't substitute for the original works; they enable new forms of knowledge and expression."
Copyright owners historically challenged technologies that read and analyze content, but courts have held that copying to understand, index, and locate information constitutes fair use. Copying for analysis is transformative because it enables new functions without substituting for original works, allowing search engines, libraries, and researchers to create indexes and analyze large datasets without licensing each rightsholder. Training AI models involves analyzing patterns across many works to extract statistical relationships rather than reproduce originals, which is similarly transformative. Requiring permission for such analysis would impose licensing burdens that would impede search, research, and the development of new knowledge and expressive tools.
Read at Electronic Frontier Foundation