On November 11, 2025, the Munich Regional Court (Landgericht München I) handed down what may prove to be the most consequential generative-AI ruling on the continent so far. In GEMA v. OpenAI, the court held that ChatGPT had infringed the copyrights of nine German songwriters by memorizing and reproducing their lyrics — both during training and in user-facing outputs. It is the first ruling by a major EU court to find a general-purpose AI provider directly liable under national copyright law, and it lands squarely in the run-up to enforcement of the EU AI Act's transparency obligations on general-purpose AI (GPAI) models under Article 53.
The decision is narrow on its face but expansive in implication. The Munich court reportedly rejected OpenAI's argument that lyric reproduction was incidental to a fair text-and-data-mining (TDM) process under Article 4 of the 2019 Copyright in the Digital Single Market Directive. That provision permits commercial TDM unless the rightsholder has reserved use in a machine-readable form. GEMA had publicly reserved rights for its repertoire as early as 2024. The court found that responsibility for outputs reproducing protected lyrics lay with OpenAI itself, not with the users whose prompts elicited them.
Why This Matters Beyond Germany
The judgment will not stay confined to Munich. Article 53 of the EU AI Act — which entered application for GPAI models on August 2, 2025 — requires providers to publish a sufficiently detailed summary of training data and to implement a policy respecting EU copyright law, including the Article 4 opt-out. Until now, those obligations existed on paper without a domestic court interpretation. GEMA v. OpenAI hands national courts and the AI Office a usable doctrinal template: if rightsholders have reserved use, the burden shifts decisively to the model provider to prove compliance.
This is not, in itself, a bad outcome. A working opt-out regime is precisely what the 2019 Directive promised, and the rule of law requires that promise be honored. The problem is calibration. Generative-AI training is a probabilistic process at planetary scale; memorization of any single work is, in most cases, an emergent statistical accident, not a deliberate act of copying. Treating every memorized fragment as a per-work infringement — multiplied across millions of opted-out works — could expose providers to liability orders of magnitude greater than the underlying economic harm.
The Innovation Cost of Maximalist Enforcement
The EU is already an unfavorable jurisdiction in which to train a frontier model. Compute is more expensive than in the US; energy costs are higher; and the regulatory perimeter — AI Act, GDPR, Digital Services Act, Digital Markets Act — is the world's densest. Mistral, the bloc's flagship AI company, has repeatedly warned that overlapping compliance regimes risk hollowing out European model development before it scales. The European Commission's own 2024 AI Innovation Package acknowledged this tension, pledging support for sovereign compute and SME-friendly compliance.
A copyright doctrine that treats memorization as strict-liability infringement, without regard to commercial substitution or de minimis use, would compound the problem. It would also create a perverse incentive: providers would default to training on lower-quality, public-domain, or licensed-only corpora in Europe, while training their globally competitive models elsewhere. The result is not stronger copyright protection but a shift of value capture — and editorial influence — outside the EU's regulatory reach.
What Proportionate Enforcement Looks Like
The Munich ruling does not require a maximalist reading, and the EU AI Office should resist one. Three principles should guide what comes next:
- Tier remedies to harm. Output-side infringement (a model reciting a song on request) is a different harm from training-side ingestion. The first is addressable with output filters and licensing; the second should be governed by the TDM opt-out, not by retroactive damages for works already trained on.
- Standardize machine-readable opt-outs. The Directive's promise of machine-readable reservation has been undermined by a chaos of formats — robots.txt extensions, TDMrep, ai.txt, bespoke metadata. The Commission should expedite a binding technical standard so compliance is verifiable, not litigated.
- Foster collective licensing. GEMA, like its sister societies across Europe, is well-positioned to offer blanket AI-training licenses, as it has done for streaming and radio for decades. A market solution — repertoire access at predictable, audited rates — would serve creators better than a decade of fragmented litigation.
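The standardization problem in the second recommendation can be made concrete. A minimal sketch of what a verifiable opt-out check might look like, loosely following the TDMRep convention of a tdm-reservation response header plus a site-wide policy file; the field names and precedence logic here are illustrative assumptions, not a spec-complete implementation:

```python
from typing import Optional

def tdm_reserved(headers: dict, tdmrep: Optional[list], path: str) -> bool:
    """Return True if text-and-data-mining rights appear to be reserved."""
    # A per-response header takes precedence: "1" signals reservation.
    if headers.get("tdm-reservation") == "1":
        return True
    # Otherwise fall back to the site-wide policy file, matching the
    # first rule whose location is a prefix of the requested path.
    for rule in tdmrep or []:
        if path.startswith(rule.get("location", "")):
            return rule.get("tdm-reservation") == 1
    # No header and no matching rule: no machine-readable reservation found.
    return False

# A compliant crawler would run this check before ingesting any page,
# e.g. tdm_reserved({"tdm-reservation": "1"}, None, "/lyrics") is True.
```

The point of a binding standard is precisely that a check this simple could be audited: one header, one well-known file, one unambiguous answer, rather than a court weighing competing interpretations of robots.txt extensions after the fact.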
A Test Case the EU Can Still Get Right
The Munich ruling is not the end of the matter. OpenAI has indicated it will appeal, and parallel proceedings are pending in France, Italy, and the Netherlands. The Court of Justice of the European Union will almost certainly be asked to clarify the interaction between Article 4 of the Copyright Directive and Article 53 of the AI Act. Those harmonizing rulings, more than any single national verdict, will shape whether Europe ends up with a workable copyright equilibrium or a punitive one.
Creators deserve compensation when their work materially powers a commercial AI system. But proportionate enforcement — calibrated to actual substitution, channeled through collective licensing, and standardized at the technical layer — would serve both rightsholders and the European AI industry. The alternative is a copyright regime that protects the past while outsourcing the future.