The Take It Down Act Goes Live: A Well-Meaning Law With a Speech Problem

The 48-hour takedown mandate is now enforceable. Without counter-notice protections, the US risks copying the DMCA's worst over-removal dynamics.

Take It Down Act: key numbers

  - 48 hrs — platform takedown window: time to remove flagged content after a valid notice
  - 1 year — grace period before the platform mandate became enforceable
  - 2 — lead Senate sponsors, bipartisan: Ted Cruz (R-TX) and Amy Klobuchar (D-MN)
  - 0 — DMCA-style counter-notice: no mandatory counter-notice process for uploaders

Key Takeaways

On May 19, 2025, President Trump signed the TAKE IT DOWN Act into law, criminalizing the publication of non-consensual intimate imagery (NCII) — including AI-generated deepfakes — and imposing a federal notice-and-takedown duty on covered online platforms. The criminal provisions took effect immediately. The platform takedown mandate, however, was given a one-year grace period. That clock has now run out. As of this month, covered platforms must remove flagged content within 48 hours of receiving a valid victim notice, with the Federal Trade Commission empowered to treat non-compliance as an unfair or deceptive practice.

The underlying harm the law targets is real, growing, and disproportionately borne by women and minors. Synthetic intimate imagery has moved from niche forum content to a click-of-a-button consumer product. A federal floor for victim relief — backed by criminal penalties for offenders and a fast removal path for platforms — is a legitimate policy goal, and the bipartisan coalition behind the bill (lead-sponsored by Senators Ted Cruz and Amy Klobuchar) deserves credit for moving on a genuine gap in federal law.

But good intentions do not immunize a statute from its design flaws. And the Take It Down Act has one structural problem that the next year of enforcement will magnify: it imports the speed of the DMCA's notice-and-takedown regime without importing its safeguards.

What the law actually requires

The statute creates two parallel tracks. The first is criminal: knowingly publishing non-consensual intimate imagery — or a digital forgery that is indistinguishable from such imagery — of an identifiable individual is now a federal offense, with enhanced penalties when minors are involved.

The second is administrative. "Covered platforms" — broadly, user-generated content services accessible to the US public — must establish a notice mechanism, and once a victim (or their authorized representative) submits a request identifying the content and asserting non-consent, the platform has 48 hours to remove the content and make reasonable efforts to remove identical copies. The FTC enforces compliance.
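The timing rule at the heart of the mandate is simple enough to sketch in code. The example below is purely illustrative: the class and field names (`TakedownNotice`, `received_at`, and so on) are invented for this sketch and come from neither the statute nor any FTC guidance; the only fact it encodes is the 48-hour window running from receipt of a valid notice.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# The Act's removal window: 48 hours from receipt of a valid victim notice.
REMOVAL_WINDOW = timedelta(hours=48)

@dataclass
class TakedownNotice:
    """Hypothetical record of one valid notice; names are this sketch's, not the law's."""
    content_id: str        # identifies the flagged item on the platform
    received_at: datetime  # when the valid notice reached the platform (timezone-aware)

    def removal_deadline(self) -> datetime:
        # Removal is due within 48 hours of the notice.
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # Past the deadline, non-compliance exposes the platform to FTC action.
        return now > self.removal_deadline()

notice = TakedownNotice("post-123", datetime(2025, 6, 1, 9, 0, tzinfo=timezone.utc))
print(notice.removal_deadline())  # 2025-06-03 09:00:00+00:00
```

Even this toy version makes the compliance pressure visible: the clock starts on receipt, not on review, which is exactly why platforms facing volume will be tempted to remove first and evaluate later.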

What the law does not include is just as important as what it does:

  - No counter-notice mechanism for uploaders who believe their content was wrongly flagged.
  - No penalty for filing false or abusive takedown notices.

Why the speech risk is structural, not hypothetical

The Electronic Frontier Foundation, the Center for Democracy & Technology, and the ACLU all warned during the legislative process that this design creates predictable over-removal pressure. The economics are straightforward: a platform that wrongly leaves up unlawful content faces FTC action and reputational damage; a platform that wrongly removes lawful content faces, under this statute, nothing. When the legal asymmetry is that stark, automated removal becomes the rational corporate response — and automated removal is, by every empirical study of the DMCA we have, prone to false positives, gaming, and weaponization against critics, ex-partners, and journalists.

The 48-hour window compounds the problem. It is too short for meaningful human review at scale, especially for smaller platforms without trust-and-safety infrastructure. Larger platforms will lean harder on classifiers; smaller ones will simply remove on notice and ask questions later, if at all.

The lesson of two decades of DMCA practice is that a takedown regime without a credible counter-notice and a penalty for abusive notices does not just risk over-removal — it guarantees it.

A better path is available, and Congress knows it

None of this is an argument for inaction on NCII. It is an argument for the FTC, in its forthcoming compliance guidance, to do the work Congress declined to do — and for Congress to fix the statute on the next legislative vehicle.

Three proportionate adjustments would preserve the law's victim-protection core while substantially reducing collateral speech damage:

  - A counter-notice path, modeled on the DMCA's, letting uploaders contest removals and have wrongly removed content restored.
  - A penalty for knowingly false or abusive notices, the analog of the DMCA's misrepresentation provision.
  - FTC compliance guidance that credits good-faith human review rather than rewarding reflexive automated removal, particularly for smaller platforms without trust-and-safety infrastructure.

The pro-innovation stake

The US still has the world's most speech-protective online liability framework, and that framework — Section 230 plus the First Amendment — is a significant reason American platforms host the world's public conversations. Eroding it with a fast, asymmetric takedown regime risks something larger than the immediate over-removal: it normalizes the European notice-and-action logic that the US has, until now, mostly resisted.

The Take It Down Act addresses a real harm. The next year of FTC implementation will determine whether it does so in a way American free expression law can live with, or whether it becomes the template for every future content-removal mandate Congress writes. The agency, and the courts that will inevitably review the first FTC enforcement action, should treat the law's silences on counter-notice and abuse penalties as bugs to be mitigated — not features to be replicated.

Sources & Citations

  1. TAKE IT DOWN Act text (S.146, 119th Congress)
  2. White House: President Trump signs TAKE IT DOWN Act
  3. EFF analysis: TAKE IT DOWN Act threats to lawful speech
  4. Center for Democracy & Technology statement on TAKE IT DOWN Act
  5. FTC authority under Section 5 (unfair/deceptive practices)