What the Reuters AI copyright decision means for the music industry


Last week saw a major AI copyright judgment in the US – and it fell in favour of human creators. The case stems back to 2020, when Thomson Reuters sued the legal AI startup ROSS Intelligence, alleging that it reproduced Reuters’ legal research to create a competing, AI-powered legal platform. A judge has now rejected the startup’s fair use defense, writing bluntly: “None of Ross’s possible defenses holds water. I reject them all.”
Sounds like a slam dunk for creators and their representatives, right? While it is a positive sign in many respects, the reality is always much more complicated.
Why did Reuters sue?
Reuters owns a legal research platform called Westlaw, where paying users can access resources including Westlaw’s original (and, this judge has now ruled, copyrightable) headnotes summarising the key points of court cases. According to the judge’s written opinion, ROSS tried to license Westlaw’s database to train its AI search tool, but Westlaw declined because it saw what ROSS was building as a competing platform. ROSS got access anyway by making a deal with a third party called LegalEase, and used that data to build a platform where users could enter a legal question and receive relevant judicial opinions. Reuters sued, and ROSS claimed fair use as part of its defense.
What did the court decide in the Reuters AI case, and what precedent does it set?
Courts weigh four factors when considering a fair use defense, but the most important is the fourth: how the use impacts the market value of the original, copyrighted work. Here, the judge ruled against ROSS after finding that the company copied more than 2,000 Westlaw headnotes in order to develop a directly competing product.
Specifically, this sets the precedent that copying original works to train an AI model, and then producing outputs that are substantially similar to those works for a commercial purpose that directly competes with the original, is not fair use. While this is a win for human creators arguing similar cases, it is likely to apply directly to only a narrow slice of them. The vast majority will not be so clear-cut. This is especially true in sound, where it is much more difficult to draw links between inputs and outputs than with written works.
What about future works in music?
The fourth factor focuses on how outputs will impact the specific existing content fed into an AI model. This is a massive consideration for the music industry to be sure, but there are just as many fears around how generative AI platforms that train on artists’ material could impact the market for those artists’ future works, as well as the work of future artists altogether. For example, if streaming services were to load up on AI-generated background music, the Beyoncés and Taylor Swifts of the world would likely still be fine – but new generations of artists would find it even harder to break through and earn sustainable income. This is not something that is adjudicated as part of the four-factor test (nor would it make sense to, as doing so would implicate just about any company which breaks down barriers to music-making). It also reflects one of the many limits of the legal process. Courts most often focus on implications for now, not fears of the future – at least until those fears pan out and there is something or someone to sue.
What comes next after Reuters v. ROSS?
It has been five years since Reuters sued ROSS Intelligence – when the suit was filed, the pandemic lockdowns had barely begun. Pending and future cases in the US could move ever-so-slightly quicker now that some precedent has been set. But because of the difficulty of linking inputs to outputs, music cases may take even longer, and many may end up settling. This is why music cases involving lyrics, rather than sounds, may come next. The US music publishers’ ongoing lawsuit against Anthropic and its AI tool Claude is one to watch.
The two parties recently reached a preliminary settlement to prevent the judge from shutting Claude down. Anthropic has also agreed to implement “guardrails” preventing Claude from producing outputs that are too similar to original works – although this also has the effect of shifting courts’ attention away from the training phase (the side the music industry tends to focus on) towards the output phase (which, again, is much murkier). In any case, these developments seem to increase the chances of a wider agreement between the music industry and the AI company – although the terms would be kept under wraps.
Notice a pattern here? Perhaps the biggest takeaway is that every step in this legal journey might be considered forward, backward, or sideways, depending on how you look at it. Just as plaintiffs in pending (and future) AI music cases will raise the Reuters v. ROSS precedent, defendants will likely harp on the case’s differences – not just written works versus audio, but also the fact that ROSS’ tool was not specifically generative AI, and that the copying involved an intermediary. The Reuters decision may be considered a small victory, but there is – and will always be – a much wider range of implications.