In November 2025, the Munich Regional Court (Landgericht München I) delivered what is widely regarded as Europe's first substantive judicial ruling on AI training data copyright, finding that OpenAI infringed German copyright law by ingesting and reproducing lyrics from songs by artists including Herbert Grönemeyer and Reinhard Mey when training ChatGPT. The case, brought by Germany's music collecting society GEMA, has become an inflection point — not because it resolves the underlying tension between generative AI and copyright, but because it confirms how unresolved that tension still is.
For European policymakers, the ruling lands at an awkward moment. The EU AI Act's Article 53 transparency obligations for general-purpose AI models took effect in August 2025, requiring developers to publish a "sufficiently detailed summary" of training content and to comply with EU copyright law — explicitly including rightsholders' opt-outs under the 2019 Copyright in the Digital Single Market (CDSM) Directive. The Munich decision is the first real-world stress test of that framework. It will not be the last.
What the Court Actually Decided
According to reporting from Reuters and the FT, the Munich court rejected OpenAI's argument that ChatGPT does not "store" copyrighted works in any infringing sense, and rejected its reliance on the CDSM's text-and-data-mining (TDM) exception under Article 4. The key finding: where memorised lyrics can be reproduced verbatim on demand, the training process itself constitutes a relevant act of reproduction — and GEMA's general reservation of rights on behalf of its members was a valid opt-out under Article 4(3).
The damages awarded were reportedly modest, but the precedential weight is significant. The ruling effectively says three things:
- Collective rightsholder opt-outs are enforceable, even if not embedded in machine-readable metadata on every individual work.
- The TDM exception is not a free pass for commercial generative AI training, particularly where outputs reproduce protected expression.
- Liability attaches to the model developer, not just the deployer or end user.
The Innovation Cost Is Real
It is tempting to frame this as a clean victory for creators. The reality is messier. Europe is already a distant third in foundation model development behind the United States and China. The Stanford AI Index has consistently shown that the vast majority of notable models since 2020 have originated in the US, with China second and the EU lagging. A legal regime in which every collecting society in every member state can assert blanket opt-outs — enforced through national courts applying national copyright doctrines — risks turning training data acquisition into a 27-jurisdiction licensing puzzle that only the largest US labs can afford to navigate.
This is not a hypothetical concern. Mistral, Aleph Alpha, and Black Forest Labs are among the few European foundation model players of any scale. Each operates on capital that is a fraction of what OpenAI, Anthropic, or Google DeepMind command. Asymmetric compliance costs do not produce a level playing field; they produce a smaller field, dominated by whoever can write the biggest licensing cheques.
A Proportionate Path Forward
None of this means rightsholders should be ignored. Creators deserve transparency about how their works are used and, where appropriate, compensation. But the policy response to Munich should not be reflexive maximalism. Three principles should guide what comes next:
1. Standardise the opt-out, don't multiply the gatekeepers
The CDSM Directive requires opt-outs to be expressed in a "machine-readable" way for works available online. The European Commission and standards bodies should accelerate work on a common technical protocol — building on initiatives such as the IETF's AI preferences working group and the C2PA content credentials standard — so that compliance is a matter of reading a header, not negotiating with dozens of collecting societies.
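No common format has been finalised, so any concrete encoding is speculative. Still, the "reading a header" idea can be sketched: the snippet below assumes a hypothetical `AI-Usage` HTTP response header of the form `train=n, summarise=y` — the header name and its fields are illustrative inventions, not drawn from the IETF drafts or any published standard.

```python
# Crawler-side opt-out check, assuming a hypothetical "AI-Usage" header.
# The header name and field names are illustrative only: neither the IETF
# AI-preferences work nor CDSM Article 4(3) case law has fixed a format.

def parse_usage_header(value: str) -> dict[str, bool]:
    """Parse 'train=n, summarise=y' into {'train': False, 'summarise': True}."""
    prefs: dict[str, bool] = {}
    for part in value.split(","):
        part = part.strip()
        if "=" in part:
            key, _, flag = part.partition("=")
            prefs[key.strip().lower()] = flag.strip().lower() == "y"
    return prefs

def may_train_on(headers: dict[str, str]) -> bool:
    """Return False unless the (hypothetical) header affirmatively permits training.

    Absent any signal, a conservative crawler would treat a blanket
    collecting-society reservation of the kind GEMA asserted as
    controlling, so the default here is refusal rather than permission.
    """
    raw = headers.get("AI-Usage")
    if raw is None:
        return False  # conservative default: no signal, no training
    return parse_usage_header(raw).get("train", False)
```

The point of the sketch is the asymmetry of effort: a developer's compliance check collapses to a few lines of header parsing, while the policy work — agreeing on the vocabulary and the default when no signal is present — is where the standardisation effort actually lies.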
2. Distinguish training from memorisation
The Munich court's focus on verbatim reproduction is instructive. The harm rightsholders actually suffer is not abstract "training" but concrete substitution — outputs that displace demand for the original. Liability regimes that target output-level infringement, paired with robust filtering obligations, are more proportionate than those that treat all ingestion as suspect.
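The training/memorisation distinction can be made operational at generation time. A minimal sketch of such an output-level filter follows, assuming a corpus of protected lyrics is available for comparison; the eight-word n-gram threshold is an illustrative knob, not a legal standard, and a production system would need paraphrase- and punctuation-tolerant matching.

```python
# Sketch of an output filter targeting verbatim reproduction rather than
# training as such: flag a generation if it shares a long-enough run of
# consecutive words with any protected work. The corpus and the n-gram
# threshold are illustrative assumptions, not drawn from the Munich ruling.

def word_ngrams(text: str, n: int) -> set[tuple[str, ...]]:
    """All n-word windows in `text`, lowercased, as hashable tuples."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reproduces_protected_text(output: str, corpus: list[str], n: int = 8) -> bool:
    """True if `output` contains any n-word run found verbatim in `corpus`."""
    out_grams = word_ngrams(output, n)
    return any(out_grams & word_ngrams(work, n) for work in corpus)
```

A serving layer running this kind of check before returning lyrics-like completions addresses the substitution harm directly, which is why output-level liability paired with filtering duties is the more proportionate instrument.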
3. Build a workable collective licensing layer
The music industry already has decades of experience with blanket licensing through bodies like GEMA, SACEM, and PRS for Music. Extending that infrastructure to AI training — at predictable, ex ante rates — would give developers legal certainty and creators reliable revenue, without forcing every dispute into court.
The Brussels Test
The AI Office, now responsible for enforcing Article 53, faces a choice. It can read Munich as a green light for maximalist transparency demands and aggressive enforcement against non-EU developers — a path that will harden the Brussels Effect into a Brussels Wall. Or it can use the moment to push for harmonised, proportionate guidance that protects rightsholders while keeping Europe in the foundation model race.
The TikTok investigation, the X DSA case, and now GEMA all point in the same direction: Europe is increasingly comfortable with confrontational enforcement of digital rules. That posture has costs. If the EU wants to be more than a regulatory superpower — if it wants any meaningful share of the AI value chain it is so eagerly trying to govern — the response to Munich must be calibration, not escalation.