This month, the compliance clock runs out on the TAKE IT DOWN Act, the federal law signed by President Trump in May 2025 that criminalizes the publication of non-consensual intimate imagery — including AI-generated deepfakes — and obligates online platforms to remove flagged content within 48 hours of a victim's request. From May 2026 onward, Meta, X, TikTok, Snap, Reddit, and effectively every covered service that hosts user-generated content must have a notice-and-takedown system in place, with the Federal Trade Commission empowered to bring enforcement actions against platforms that fail to comply.
The underlying harm is real, and the political consensus behind the bill — co-sponsored by Senators Ted Cruz and Amy Klobuchar and passed with overwhelming bipartisan support — reflects how badly Congress wants to be seen acting on AI-enabled abuse. Generative tools have made it trivially cheap to fabricate sexual images of identifiable people, and the victims, disproportionately women and minors, have until now faced a hostile patchwork of state laws and indifferent platform processes. A federal floor for victim relief is overdue.
But the architecture Congress chose to deliver that relief deserves a harder look than it received on the way to the President's desk.
What the law actually requires
The TAKE IT DOWN Act does two things. First, it creates federal criminal liability for knowingly publishing non-consensual intimate visual depictions of identifiable adults and minors, including computer-generated imagery that is indistinguishable from an authentic image of a real person. Second, it imposes a notice-and-takedown duty on "covered platforms," defined broadly to include social networks, image hosts, and any service that primarily provides a forum for user-generated content.
Within one year of enactment, covered platforms must establish a process by which a victim (or their representative) can request removal, and they must take down the flagged content, along with any known identical copies, within 48 hours of a valid request. The FTC enforces compliance under its unfair-or-deceptive-practices authority.
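To make the operational burden concrete, here is a minimal, purely illustrative sketch (in Python) of the bookkeeping the 48-hour duty implies: a deadline attached to each notice, and a hash index so "identical copies" can be located and pulled alongside the flagged item. Every name in it is hypothetical, nothing here is drawn from the statute or any platform's actual tooling, and it assumes "identical copies" means byte-identical files, a term the law leaves undefined; a production system would more likely lean on perceptual hashing and human review.

```python
# Illustrative sketch only; all names are hypothetical, not statutory terms.
# Assumes "identical copies" = byte-identical files matched by cryptographic
# hash, which is one possible reading of the requirement, not the only one.
import hashlib
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the statutory 48-hour clock

@dataclass
class TakedownRequest:
    request_id: str
    content_id: str            # the item identified in the victim's notice
    received_at: datetime      # should be timezone-aware (UTC)

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

class ContentStore:
    """Toy in-memory store with a hash index, so a flagged item and any
    byte-identical copies can be found and removed together."""

    def __init__(self) -> None:
        self.items: dict[str, bytes] = {}
        self.by_hash: dict[str, set[str]] = {}

    def add(self, content_id: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self.items[content_id] = data
        self.by_hash.setdefault(digest, set()).add(content_id)

    def remove_with_copies(self, content_id: str) -> set[str]:
        """Remove the flagged item plus every stored byte-identical copy."""
        data = self.items.get(content_id)
        if data is None:
            return set()
        digest = hashlib.sha256(data).hexdigest()
        copies = self.by_hash.pop(digest, {content_id})
        for cid in copies:
            self.items.pop(cid, None)
        return copies

def handle(request: TakedownRequest, store: ContentStore) -> bool:
    """Process one request and report whether removal beat the deadline."""
    store.remove_with_copies(request.content_id)
    return datetime.now(timezone.utc) <= request.deadline
```

Even this toy version presupposes intake, hashing, and deadline-tracking infrastructure that the largest platforms already run and smaller ones do not, a point the compliance-cost objection below picks up.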
Where the design goes wrong
The intent is unimpeachable. The mechanism, however, borrows the worst features of the DMCA's notice-and-takedown regime without inheriting its hard-won procedural safeguards. Three problems stand out.
1. A 48-hour clock invites over-removal. Platforms facing potential FTC enforcement for non-compliance will rationally err on the side of taking content down first and asking questions later. The Electronic Frontier Foundation, the Center for Democracy & Technology, and the Cyber Civil Rights Initiative (the latter no free-speech absolutist, but the leading advocacy group for victims of image-based abuse) all flagged this exact risk during the bill's markup. Unlike the DMCA, the statute contains no counter-notice procedure and no penalty for materially false takedown requests. A bad-faith claimant can have lawful speech removed and face no statutory consequence.
2. The definition reaches further than the headlines suggest. The law covers "intimate visual depictions" generated to appear indistinguishable from an authentic image. In practice, platform trust-and-safety teams will not have time to adjudicate that photorealism question within a 48-hour window. The path of least legal risk is to remove anything plausibly within scope, including satire, political commentary, and journalism that uses manipulated imagery to critique public figures.
3. Compliance costs scale poorly. Meta, Google, and TikTok already run content moderation at industrial scale and can absorb the new workflow. A mid-sized forum, a fediverse instance, or a small image board cannot. The statute's broad "covered platform" definition does not meaningfully distinguish a billion-user service from a hobbyist community, and the lack of a small-platform carve-out will tilt the market further toward incumbents, the opposite of what a healthy speech ecosystem needs.
What proportionate regulation would look like
None of these objections require abandoning the law. They point toward a narrower, sturdier design that the FTC's implementing guidance — and Congress, in any future amendment — should adopt:
- A counter-notice and reinstatement procedure, modeled on 17 U.S.C. § 512(g), giving uploaders a fast path to challenge mistaken removals.
- Penalties for knowingly false takedown demands, analogous to § 512(f), to deter weaponization.
- A tiered compliance regime, with lighter obligations for platforms below a user or revenue threshold, similar to the structure adopted in the EU's Digital Services Act.
- Transparency reporting, requiring covered platforms to publish aggregate takedown statistics so the public can see whether the system is functioning as intended or being abused.
The bigger picture
The United States has spent the last decade arguing, correctly, that Section 230 and a light-touch intermediary-liability regime are foundational to American leadership in digital services. The TAKE IT DOWN Act is not, by itself, a reversal of that posture. But it is the first federal statute to impose a hard takedown clock on lawful user-generated-content services, and it will become a template. The backers of the Kids Online Safety Act, of pending AI-labeling proposals, and of a wave of state-level deepfake bills are all watching what happens next.
If May 2026 produces visible victim relief without a measurable spike in wrongful removals, the model will spread. If it produces the DMCA's familiar pattern of opportunistic abuse (and there is no procedural reason to expect otherwise), the flawed design will be that much harder to course-correct once the next law copies it. How the FTC writes its implementing guidance is the most consequential intermediary-liability decision the United States will make this year.