Anthropic purchased millions of physical books, pulled them apart, and scanned them to train its AI model Claude, destroying the physical copies in the process. The practice came to light in an ongoing copyright case in the US, in which the court ruled that Anthropic’s use of purchased books without the authors’ permission is “fair use,” giving the company partial legal approval.
Anthropic destroyed millions of books
The company scanned the pages of the books it purchased, digitized them, and destroyed the physical copies. The scanned data is not shared with the public. US District Judge William Alsup ruled that training on books Anthropic had lawfully bought, even without the authors’ permission, was legal. The decision paves the way for AI companies to purchase copyrighted works and use them for training purposes.

The court based its decision in part on the “first sale doctrine.” Under this rule of law, once a copy of a work has been lawfully sold, the buyer may resell or dispose of that particular copy without the copyright holder’s permission, which is also what allows books to be sold second-hand. Any abuse of this rule, however, remains the responsibility of the AI companies.
Authors, publishers, and archivists consider the physical destruction of the books unethical and unnecessary. The case also revealed that Anthropic’s training drew on pirated book archives. As a result, the company faces a separate trial scheduled for December: the court ruled that the use of pirated works was not protected, leaving Anthropic exposed to damages of up to $150,000 for each unauthorized book.
This controversial practice by Anthropic has once again highlighted how important and complex copyright and ethical questions are in the development of artificial intelligence. What do you think about this issue? Share your views with us in the comments section below.