Brazil is doing something unusual in the global debate over AI and online harm: instead of starting from scratch with a sweeping new "AI safety" statute, lawmakers are extending an existing, harm-based criminal framework to cover synthetic non-consensual intimate imagery (NCII) and other AI-amplified forms of digital gender-based violence. Through 2025 and into 2026, the National Congress has advanced legislation criminalizing deepfake NCII, while the Superior Electoral Tribunal (TSE) has tightened rules on "violência política de gênero" (gender-based political violence) content ahead of the 2026 elections.
The contrast with the rest of the world matters. Where the EU's AI Act and various US state bills lean heavily on ex-ante obligations for developers and platforms, Brazil is principally tightening the criminal code against the people who actually weaponize the technology. That is the more defensible policy instinct — and the part of Brazil's approach that the rest of the world should study carefully before importing the rougher edges.
A decade of harm-based statutes, now extended to AI
The new deepfake provisions do not arrive on a blank slate. They build on a fairly mature stack:
- Lei Carolina Dieckmann (Lei 12.737/2012) — Brazil's foundational cybercrime statute, enacted after an actress's private photos were stolen and leaked, established "invasão de dispositivo informático" (unauthorized intrusion into a computing device) as a crime under the Penal Code (Article 154-A).
- Lei 13.718/2018 — added Articles 216-B and 218-C to the Penal Code, criminalizing the unauthorized recording of intimate content (216-B) and its non-consensual distribution (218-C), with aggravated penalties when the offense is committed by a current or former partner or for revenge or humiliation.
- Lei 14.132/2021 — created the crime of perseguição (stalking) as Article 147-A, expressly covering conduct "by any means," finally giving prosecutors a tool calibrated for repetitive online harassment that falls short of physical threat.
- Lei 14.188/2021 — codified violência psicológica contra a mulher (psychological violence against women) as Article 147-B of the Penal Code and expanded the protections of the Lei Maria da Penha (Lei 11.340/2006) to explicitly cover online psychological violence.
The 2025-2026 deepfake legislation slots into this stack rather than displacing it: it makes clear that the synthetic origin of an image is not a defense, and that producing or distributing AI-generated nudes of an identifiable person without consent is criminal conduct on equal footing with sharing real intimate material. That is the right structural move. The harm — to dignity, reputation, employment, mental health — does not become less real because the pixels were generated by a diffusion model.
Why a harm-based approach beats a model-based one
The temptation in jurisdictions debating generative AI is to push liability up the stack: license the models, audit the training data, require pre-deployment risk assessments, fine the foundation-model developer. The intentions are good; the side effects are familiar. Compliance moats favor incumbents, open-source releases get chilled, and small Brazilian developers — exactly the kind of ecosystem Brazil has been trying to cultivate through its national AI strategy and BNDES funding lines — get priced out.
Brazil's current direction targets the conduct that produces the harm: making, sharing, or threatening to share synthetic intimate imagery of a real person without consent. That is a narrower, more proportionate intervention. It preserves the legality of the underlying tools — image generation, face-editing, voice cloning — which have legitimate uses across film, accessibility, journalism, and research. And it is consistent with the layered approach to intermediary liability in the Marco Civil da Internet (Lei 12.965/2014), under which platforms generally become liable only if they fail to remove specific content after a judicial order (Article 19), with a victim-notice fast track for intimate imagery (Article 21), rather than serving as general gatekeepers of legality.
Where the picture gets more complicated: the TSE and the 2026 cycle
The harder set of questions sits on the electoral side. Since the 2024 municipal elections and heading into the 2026 cycle, the TSE has expanded the concept of violência política de gênero and pressed platforms to act faster on content that allegedly constitutes it. The underlying problem is real: women candidates in Brazil report disproportionate harassment, deepfaked sexual content, and coordinated smear campaigns, and these dynamics suppress political participation.
But the regulatory tools are uncomfortably broad. Rules that effectively require platforms to identify and remove a fuzzy category of "political gender violence" within tight windows — under threat of fines — collapse two very different problems into one: (1) clearly unlawful deepfake NCII, where takedown is straightforward, and (2) harsh political criticism, satire, or memes that target women candidates, where takedown is a speech decision a tribunal should not delegate to a platform's trust and safety team on the clock.
The proportionate path is to keep the criminal track sharp and the platform-mandate track narrow: clear definitions, court-ordered takedowns for contested cases, transparency reports, and post-hoc accountability — not pre-election pressure to over-remove.
What the rest of the world should take from Brazil
Three things are worth exporting. First, that AI-specific harms are usually old harms in new clothing, and existing criminal statutes can often be extended rather than replaced. Second, that survivor-centered remedies — fast takedowns, civil damages, protective orders under Maria da Penha — matter more in practice than headline-grabbing model bans. Third, that intermediary liability should remain narrow: targeting the perpetrator first, the platform second, and the developer only where genuine negligence is shown.
If Brazil sticks to the harm-based logic that runs through its 2012-2021 statutes, the country will end up with one of the better-calibrated responses to deepfake abuse anywhere in the world. The risk is the electoral overlay — and that is where Congress, civil society, and platforms still have work to do before October.