New Court Ruling Allows AI Training on Copyrighted Works — A Blow to Creators
A recent decision by a U.S. District Court has sparked major controversy. The court ruled that companies can legally use copyrighted works to train artificial intelligence (AI) models. The decision is being seen as a serious setback for artists, writers, musicians, and other content creators.
For years, creators have raised concerns about how AI companies collect and use their content. These companies often scrape websites and scan books to feed data into generative AI systems, including large language models (LLMs) that power chatbots and image generators. The content used for training is rarely licensed, and creators usually get no credit or payment.
The latest ruling came from the U.S. District Court for the Northern District of California. The case involved authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson. They filed a lawsuit against the AI company Anthropic in 2024. The authors claimed that Anthropic used pirated versions of their books to train its Claude AI models.
The court, under Judge William Alsup, gave a mixed decision. But overall, the judgment favoured Anthropic. Judge Alsup ruled that converting printed books to digital versions and then using them for AI training can be considered “fair use” under U.S. copyright law. He even compared it to teaching children how to write by showing them good examples, though many say this comparison doesn’t hold up for powerful AI systems.
However, the court did find one area where Anthropic may be at fault. Using pirated copies of a book does not fall under fair use. The court will hold a separate trial to determine how much Anthropic owes in damages for that.
This ruling is a major blow to content makers. It allows AI developers to freely use published works without approval. For those whose livelihood depends on their original work, this could mean fewer job opportunities and reduced income.
AI models often rely on human creativity to function. They learn how to write, draw, or compose music by analysing countless examples created by real people. But when these models produce work, they do so without giving credit to those they learned from. Worse, they may even take away traffic from original websites and publishers.
For instance, many users now ask AI models for summaries or instructions instead of visiting the actual website that holds the source material. This hurts publishers who depend on visitors for ad revenue. Some sites, like AppleInsider, have found their tutorials paraphrased and rearranged by AI tools, with errors introduced along the way.
Some companies, like Apple, are trying to act responsibly. Apple has reportedly paid millions to license content from news outlets and image libraries like Shutterstock. Others have blocked AI bots from accessing their sites using tools like robots.txt. But that only works if the AI company respects the rules.
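The robots.txt mechanism mentioned above is a plain-text file served at a site's root that asks crawlers to stay away. A minimal sketch of such a file is shown below; the user-agent tokens are the ones publicly documented by OpenAI, Anthropic, and Common Crawl at the time of writing, but vendors change them, and compliance is entirely voluntary:

```
# robots.txt — served at https://example.com/robots.txt
# Each rule addresses one crawler by its published user-agent token.

User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: CCBot
Disallow: /
```

A publisher would check each AI vendor's documentation for its current crawler token, since a rule only blocks bots that both identify themselves honestly and choose to honour the file.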
This court ruling sets a strong legal precedent. It will likely influence future lawsuits involving AI and copyright. Meanwhile, in Europe, there are ongoing efforts to regulate AI more strictly. The U.S., however, is seeing increased lobbying from tech firms to delay such regulations.
For now, the fight between human creativity and AI continues, with no clear resolution in sight.