Recent decisions in Kadrey v. Meta and Bartz v. Anthropic have drawn attention for their implications for AI training and copyright law. In both cases, federal judges found that using copyrighted books to train large language models (LLMs) could qualify as “transformative” under the fair use doctrine. This suggests that, under certain conditions, AI developers may be shielded from liability when repurposing creative works for model training.
However, these rulings are narrow in scope and should not be treated as definitive guidance. Both courts emphasized that their decisions turned on the specific facts presented—particularly the plaintiffs’ failure to demonstrate market harm. Moreover, Judge Alsup’s opinion in the Anthropic case made clear that even where a use is transformative, the method of acquiring training data (e.g., through pirated sources) can still create liability.
For legal and compliance teams, these cases are just one piece of a much larger puzzle. They underscore the importance of evaluating both the purpose of use and the provenance of training data. As the legal framework around AI and copyright continues to evolve, companies should remain cautious and seek tailored advice when navigating fair use in this rapidly developing area.