


Court Issues Another Fair Use Ruling Backing AI Developers
Meta gets a summary judgment win against authors, including Sarah Silverman
It’s been a big week for AI-related court cases, with two US federal judges handing down separate (and somewhat contradictory) rulings on whether using copyrighted materials without permission to train AI can be considered “fair use.”
The first ruling:
As reported here, on Monday, June 23, Judge William Alsup sided partially with AI company Anthropic, declaring that training its AI model on the copyrighted materials of the three authors who brought the lawsuit counted as fair use.
The second ruling:
On Wednesday, June 25, in the same court – the US District Court for the Northern District of California – Judge Vince Chhabria ruled on a class action against Meta (developer of the Llama large language model) brought by 13 writers, including Sarah Silverman, Richard Kadrey and Junot Diaz.
As per Music Business Worldwide, they claimed Llama had been trained on their works without permission, and that it “would even reproduce parts of those works when prompted.”
While determining that training AI on copyrighted material without permission is not fair use in most cases, the judge nonetheless granted Meta’s request for a partial summary judgment.
He found Llama is “not capable of generating enough text from the plaintiffs’ books to matter.”
Regarding the authors’ claim that Meta’s unauthorized use of their works diminished their ability to license them for AI training, he said, “the plaintiffs are not entitled to the market for licensing their works as AI training data.”
His full summary can be found here.
Good news for rightsholders?
As per MBW, Chhabria offered a glimmer of hope for rightsholders when highlighting an argument that could work in their favor, namely that “allowing tech companies to train AI on copyrighted works would severely harm the market for human-created works.”
Chhabria also criticized Judge Alsup’s argument regarding the transformative nature of AI, and the parallel he drew between training AI models and training schoolchildren to write well.
He stated: “Using books to teach children to write is not remotely like using books to create a product that a single individual could employ to generate countless competing works with a miniscule fraction of the time and creativity it would otherwise take. This inapt analogy is not a basis for blowing off the most important factor in the fair use analysis.”
He also dismissed an oft-heard argument by AI companies that forcing them to license materials for training would slow down or even stop the development of the technology.
Common ground:
Both judges agreed on one thing: using pirated material to train AI is not acceptable.
Anthropic will face trial in December over its use of material from online libraries known to offer pirated books to train its AI model.
In the case against Meta, the judge has allowed one part of the authors’ case to proceed: the allegation that Meta used a torrent file-sharing network to download illegal copies of books and stripped copyright management information from them, in violation of the Digital Millennium Copyright Act.
👋 Disclosures & Transparency Block
- This story was written with information sourced from Music Business Worldwide and Digital Music News.
- We covered it because of the implications of these cases for rightsholders in multiple fields, including the music industry.