
Munich's GEMA v. OpenAI Ruling: Why the EU's Copyright Reckoning Needs Calibration, Not Capitulation

Germany's first major AI copyright verdict reshapes how Article 53 of the EU AI Act will bite — and risks tilting the balance away from innovation.

[Infographic — EU AI Copyright: The Munich Inflection Point. Key figures: 9 GEMA lawsuit plaintiffs (German songwriters); Aug 2025, AI Act GPAI rules in force (Article 53); Directive 2019/790, Article 4 TDM opt-out; <10% EU share of global AI compute (2024 industry estimates).]


On November 11, 2025, the Munich Regional Court (Landgericht München I) handed down what may prove to be the most consequential generative-AI ruling on the continent so far. In GEMA v. OpenAI, the court held that ChatGPT had infringed the copyrights of nine German songwriters by memorizing and reproducing their lyrics — both during training and in user-facing outputs. It is the first ruling by a major EU court to find a general-purpose AI provider directly liable under national copyright law, and it lands squarely in the run-up to enforcement of the EU AI Act's transparency obligations on general-purpose AI (GPAI) models under Article 53.

The decision is narrow on its face but expansive in implication. The Munich court reportedly rejected OpenAI's argument that lyric reproduction was incidental to a lawful text-and-data-mining (TDM) process under Article 4 of the 2019 Copyright in the Digital Single Market Directive. That provision permits commercial TDM unless the rightsholder has reserved use in a machine-readable form. GEMA had publicly reserved rights for its repertoire as early as 2024. The court found that OpenAI bore responsibility for outputs reproducing the protected lyrics, holding that liability rests with the model provider rather than with the user whose prompt elicits the reproduction.
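The Article 4 opt-out turns on a "machine-readable" reservation, which in practice means a signal a crawler can check before ingesting a page. A minimal sketch, assuming the HTTP header names of the W3C TDM Reservation Protocol (TDMRep) community draft (`tdm-reservation`, `tdm-policy`); the Directive itself prescribes no specific format, so treat those names as an assumption rather than a legal requirement:

```python
# Sketch: checking a machine-readable TDM opt-out under Article 4 of
# Directive (EU) 2019/790. Header names follow the W3C TDM Reservation
# Protocol (TDMRep) draft; the Directive does not mandate a format.

def tdm_rights_reserved(headers: dict[str, str]) -> bool:
    """Return True if the response headers signal a TDM reservation.

    TDMRep uses 'tdm-reservation: 1' to reserve rights; an optional
    'tdm-policy: <url>' points at the rightsholder's licensing terms.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    value = lowered.get("tdm-reservation")
    return value is not None and value.strip() == "1"


# A compliant crawler would skip, or seek a licence for, any URL whose
# response carries the reservation signal.
example = {"Content-Type": "text/html",
           "tdm-reservation": "1",
           "tdm-policy": "https://example.org/tdm-policy.json"}
print(tdm_rights_reserved(example))  # -> True
```

The point of the Munich ruling, on this reading, is that once such a signal exists, ignoring it forfeits the Article 4 exception.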

Why This Matters Beyond Germany

The judgment will not stay confined to Munich. Article 53 of the EU AI Act — which entered application for GPAI models on August 2, 2025 — requires providers to publish a sufficiently detailed summary of training data and to implement a policy respecting EU copyright law, including the Article 4 opt-out. Until now, those obligations existed on paper without a domestic court interpretation. GEMA v. OpenAI hands national courts and the AI Office a usable doctrinal template: if rightsholders have reserved use, the burden shifts decisively to the model provider to prove compliance.

This is not, in itself, a bad outcome. A working opt-out regime is precisely what the 2019 Directive promised, and the rule of law requires that promise be honored. The problem is calibration. Generative-AI training is a probabilistic process at planetary scale; memorization of any single work is, in most cases, an emergent statistical accident, not a deliberate act of copying. Treating every memorized fragment as a per-work infringement — multiplied across millions of opted-out works — could expose providers to liability orders of magnitude greater than the underlying economic harm.
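The memorization question can be made concrete: whether an output "reproduces" a work ultimately reduces to how long a verbatim run it shares with that work, and where courts draw the threshold. A toy sketch of that measurement (the strings are invented examples; real memorization audits operate over token sequences at corpus scale, not single sentences):

```python
# Toy illustration of why per-fragment infringement claims need a
# threshold: measure the longest verbatim word run shared between a
# model output and a protected text. The strings below are invented.

def longest_shared_run(output: str, work: str) -> int:
    """Length, in words, of the longest verbatim run from `work`
    that appears in `output` (longest common substring over words)."""
    a, b = output.lower().split(), work.lower().split()
    best = 0
    prev = [0] * (len(b) + 1)  # DP row for the previous word of `a`
    for i in range(1, len(a) + 1):
        cur = [0] * (len(b) + 1)
        for j in range(1, len(b) + 1):
            if a[i - 1] == b[j - 1]:
                cur[j] = prev[j - 1] + 1  # extend the matching run
                best = max(best, cur[j])
        prev = cur
    return best


lyric = "the rain falls soft on empty streets tonight"
model_out = "she sang that the rain falls soft on empty streets again"
print(longest_shared_run(model_out, lyric))  # -> 7
```

A one-word overlap is statistical noise; a seven-word run starts to look like reproduction. Calibrated enforcement would distinguish the two rather than treating any nonzero overlap as a per-work infringement.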

The Innovation Cost of Maximalist Enforcement

The EU is already an unfavorable jurisdiction in which to train a frontier model. Compute is more expensive than in the US; energy costs are higher; and the regulatory perimeter — AI Act, GDPR, Digital Services Act, Digital Markets Act — is the world's densest. Mistral, the bloc's flagship AI company, has repeatedly warned that overlapping compliance regimes risk hollowing out European model development before it scales. The European Commission's own 2024 AI Innovation Package acknowledged this tension, pledging support for sovereign compute and SME-friendly compliance.

A copyright doctrine that treats memorization as strict-liability infringement, without regard to commercial substitution or de minimis use, would compound the problem. It would also create a perverse incentive: providers would default to training on lower-quality, public-domain, or licensed-only corpora in Europe, while training their globally competitive models elsewhere. The result is not stronger copyright protection but a shift of value capture — and editorial influence — outside the EU's regulatory reach.

What Proportionate Enforcement Looks Like

The Munich ruling does not require a maximalist reading, and the EU AI Office should resist one. Three principles should guide what comes next:

  1. Calibrate remedies to actual substitution: damages should track the economic harm a reproduction causes in the market for the original work, not a per-work tally of memorized fragments.
  2. Channel compensation through collective licensing: societies such as GEMA exist to price repertoire at scale, and blanket licences serve both sides better than case-by-case litigation.
  3. Standardize opt-outs at the technical layer: a common machine-readable reservation format would let providers honor Article 4 at crawl time rather than litigate it after deployment.

A Test Case the EU Can Still Get Right

The Munich ruling is not the end of the matter. OpenAI has indicated it will appeal, and parallel proceedings are pending in France, Italy, and the Netherlands. The Court of Justice of the European Union will almost certainly be asked to clarify the interaction between Article 4 of the Copyright Directive and Article 53 of the AI Act. Those harmonizing rulings, more than any single national verdict, will shape whether Europe ends up with a workable copyright equilibrium or a punitive one.

Creators deserve compensation when their work materially powers a commercial AI system. But proportionate enforcement — calibrated to actual substitution, channeled through collective licensing, and standardized at the technical layer — would serve both rightsholders and the European AI industry. The alternative is a copyright regime that protects the past while outsourcing the future.

Sources & Citations

  1. GEMA press release on Munich ruling (Nov 2025)
  2. EU AI Act — Article 53 (GPAI obligations)
  3. EU Copyright in the Digital Single Market Directive (2019/790)
  4. European Commission AI Innovation Package (2024)