On 2 August 2026, the EU AI Act's obligations for high-risk AI systems begin to bite — and few categories carry as much symbolic weight as the one tucked into Annex III, point 8(a): AI tools 'intended to be used by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts.' Twenty-four months after the Regulation entered into force in August 2024, member state courts, justice ministries and the vendors who supply them now have to demonstrate conformity for software that, in many jurisdictions, has been quietly deployed for years.
The European Commission's AI Office, working alongside the Council of Europe's CEPEJ (European Commission for the Efficiency of Justice), has been issuing implementation guidance through early 2026 to help national administrations interpret what 'assisting a judicial authority' actually means in practice. The honest answer is: a lot more than anyone initially assumed.
The Scope Problem
Annex III, point 8(a) is one of the broadest entries in the high-risk catalogue. Read literally, it captures everything from sentencing-support dashboards and recidivism-risk scoring to the generative-AI legal research assistants that clerks in Paris, Madrid and Warsaw have already integrated into their workflows. It arguably reaches case-management triage systems that flag urgent filings, anonymisation tools for published judgments, and translation engines used to read foreign-language evidence.
The Act's Recital 61 tries to narrow this, clarifying that the high-risk classification 'should not extend to AI systems intended for purely ancillary administrative activities that do not affect the actual administration of justice' — citing examples like document anonymisation, internal communications and resource allocation. But the line between 'ancillary' and 'substantive' is doing enormous work, and CEPEJ's guidance has so far stopped short of providing a clean taxonomy. Justice ministries in several member states have spent the spring conducting hurried inventories of every tool a judge might touch, unsure which side of the line each falls on.
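To see why those inventories are hard, here is a minimal sketch of the classification exercise. Everything in it is hypothetical: the function taxonomy, the tool names and the classify logic illustrate the Recital 61 distinction; they are not a test drawn from the Act or from CEPEJ guidance.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Function(Enum):
    """What the tool does in the judicial workflow (hypothetical taxonomy)."""
    RISK_SCORING = auto()    # e.g. recidivism or pretrial-detention scoring
    LEGAL_RESEARCH = auto()  # researching/interpreting facts and the law
    CASE_TRIAGE = auto()     # flagging urgent filings
    ANONYMISATION = auto()   # redacting published judgments
    INTERNAL_ADMIN = auto()  # communications, resource allocation

# Illustrative mapping only: Recital 61 names anonymisation, internal
# communications and resource allocation as 'purely ancillary'; the
# middle ground is exactly the territory the guidance has not resolved.
ANCILLARY = {Function.ANONYMISATION, Function.INTERNAL_ADMIN}
CLEARLY_SUBSTANTIVE = {Function.RISK_SCORING, Function.LEGAL_RESEARCH}

@dataclass
class JudicialTool:
    name: str
    function: Function

def classify(tool: JudicialTool) -> str:
    if tool.function in CLEARLY_SUBSTANTIVE:
        return "high-risk (Annex III, point 8(a))"
    if tool.function in ANCILLARY:
        return "ancillary (Recital 61 carve-out)"
    return "unresolved: needs AI Office / CEPEJ guidance"

inventory = [
    JudicialTool("recidivism scorer", Function.RISK_SCORING),
    JudicialTool("clerk research assistant", Function.LEGAL_RESEARCH),
    JudicialTool("urgent-filing triage", Function.CASE_TRIAGE),
    JudicialTool("judgment anonymiser", Function.ANONYMISATION),
]

for tool in inventory:
    print(f"{tool.name}: {classify(tool)}")
```

The point is the third branch: on the current state of the guidance, a large share of any real inventory lands in 'unresolved'.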
Why Proportionality Matters Here
Holding genuinely consequential systems — particularly those that score defendants for risk of reoffending, or recommend pretrial detention — to the AI Act's full high-risk regime is unambiguously the right call. The Council of Europe's 2018 European Ethical Charter on the Use of AI in Judicial Systems already laid out the principles: respect for fundamental rights, non-discrimination, quality and security, transparency, and the cardinal rule that any tool be 'under user control'. The Act's requirements for risk management, data governance, human oversight, technical documentation and post-market monitoring (Articles 9–17) are a sensible operational translation of those principles.
But the scope creep risk is real. If every chatbot a clerk uses to summarise a 200-page filing, or every search engine that ranks precedent, is treated as 'high-risk' — with the conformity assessments, registration in the EU database, fundamental-rights impact assessments and post-market surveillance plans that designation triggers — courts will either stop using the tools that have made them faster and more consistent, or quietly continue using them in a permanent state of non-compliance. Neither outcome serves litigants.
What Good Implementation Looks Like
The AI Office and CEPEJ have an opening to get this right with targeted, plain-language guidance. Three principles should drive it:
- Function over form. A general-purpose LLM that a judge uses to draft a procedural order is not the same thing as a bespoke recidivism-scoring system trained on conviction data. Guidance should focus on the specific function performed in the judicial workflow, not the underlying technology stack.
- Human-in-the-loop as the default safe harbour. Where a judge retains genuine, documented decisional authority — and the tool's output is one input among many — the compliance burden should be calibrated downward. Article 14 of the Act already centres human oversight; CEPEJ guidance should make clear that meaningful oversight materially reduces residual risk, as the sketch after this list illustrates.
- Open ecosystems matter. European legal-tech startups and open-source legal research projects cannot absorb the same compliance overhead as Thomson Reuters or LexisNexis. A regulatory regime that effectively reserves judicial AI to a handful of incumbents would entrench exactly the market concentration the Commission claims to oppose.
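The first two principles combine naturally into a calibration rather than a binary switch. The following toy sketch assumes a hypothetical per-tool oversight record and a made-up burden scale; neither is defined in the Act.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Hypothetical record a court might keep per tool (not an Act-defined artefact)."""
    function: str                 # what the tool does in the workflow, not its tech stack
    judge_retains_decision: bool  # documented decisional authority (Article 14 spirit)
    output_is_sole_basis: bool    # is the tool's output the only input to the decision?

def compliance_tier(d: Deployment) -> str:
    """Illustrative calibration: function over form, oversight as safe harbour."""
    if d.function in {"risk scoring", "detention recommendation"}:
        return "full high-risk regime"  # consequential regardless of oversight
    if d.judge_retains_decision and not d.output_is_sole_basis:
        return "reduced burden (documented human oversight)"
    return "full high-risk regime"

print(compliance_tier(Deployment("legal research", True, False)))
print(compliance_tier(Deployment("risk scoring", True, False)))
```

The design point is that the record keys on what the tool does and who decides, not on whether the tool happens to be an LLM.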
The Stakes
Penalties for non-compliance with high-risk obligations run up to €15 million or 3% of worldwide annual turnover, whichever is higher (Article 99), with higher tiers for prohibited-practice violations. Few public-sector procurement contracts allocate that risk cleanly between courts and vendors, and litigation over who bears the conformity-assessment burden is almost certain.
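For a sense of scale, the ceiling is the higher of the two figures. A minimal sketch of the Article 99(4) arithmetic, with turnover numbers chosen purely for illustration:

```python
def article_99_4_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Maximum fine for high-risk non-compliance under Article 99(4):
    the higher of EUR 15 million or 3% of worldwide annual turnover.
    (SME-specific rules and the other Article 99 tiers are not modelled.)"""
    return max(15_000_000.0, 0.03 * worldwide_annual_turnover_eur)

# Illustrative turnovers only
for turnover in (100e6, 500e6, 2e9):
    print(f"turnover EUR {turnover:,.0f} -> ceiling EUR {article_99_4_ceiling(turnover):,.0f}")
```

Below €500 million in turnover, the flat €15 million figure dominates, which is precisely why the exposure looms so much larger for small vendors and public bodies than for the incumbents.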
Civil society groups including the Electronic Frontier Foundation have rightly pushed for strong fundamental-rights protections in the EU's digital rulebook. Those protections are most credible when they target genuine harms — opaque risk scoring, automated denial of liberty, discriminatory pattern detection — rather than the prosaic research aids that help an overworked judiciary deliver decisions faster.
August 2026 is not the end of this conversation. It is the beginning of a multi-year stress test of whether Europe can regulate AI in its most sensitive public-sector deployment without strangling the tools that make justice more accessible. The AI Office's next round of guidance will tell us a great deal about which path it has chosen.