On May 19, 2025, President Trump signed the TAKE IT DOWN Act into law, criminalizing the publication of non-consensual intimate imagery (NCII) — including AI-generated deepfakes — and imposing a federal notice-and-takedown duty on covered online platforms. The criminal provisions took effect immediately. The platform takedown mandate, however, was given a one-year grace period. That clock has now run out. As of this month, covered platforms must remove flagged content within 48 hours of receiving a valid victim notice, with the Federal Trade Commission empowered to treat non-compliance as an unfair or deceptive act or practice under the FTC Act.
The underlying harm the law targets is real, growing, and disproportionately borne by women and minors. Synthetic intimate imagery has moved from niche forum content to a click-of-a-button consumer product. A federal floor for victim relief — backed by criminal penalties for offenders and a fast removal path for platforms — is a legitimate policy goal, and the bipartisan coalition behind the bill, lead-sponsored by Senators Ted Cruz and Amy Klobuchar, deserves credit for moving on a genuine gap in federal law.
But good intentions do not immunize a statute from its design flaws. And the Take It Down Act has one structural problem that the next year of enforcement will magnify: it imports the speed of the DMCA's notice-and-takedown regime without importing its safeguards.
What the law actually requires
The statute creates two parallel tracks. The first is criminal: knowingly publishing non-consensual intimate imagery — or a digital forgery that is indistinguishable from such imagery — of an identifiable individual is now a federal offense, with enhanced penalties when minors are involved.
The second is administrative. "Covered platforms" — broadly, user-generated content services accessible to the US public — must establish a notice mechanism, and once a victim (or their authorized representative) submits a request identifying the content and asserting non-consent, the platform has 48 hours to remove the content and make reasonable efforts to remove identical copies. The FTC enforces compliance.
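For a platform, that duty reduces to a hard 48-hour clock attached to each valid notice. Here is a minimal sketch of what the statute implies for a notice pipeline; the law dictates outcomes, not implementation, so every type, field, and helper name below (RemovalRequest, remove_content, and so on) is a hypothetical illustration.

```python
# Illustrative sketch of the statutory notice pipeline. All names are
# hypothetical; the statute specifies outcomes, not implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # statutory removal deadline

@dataclass
class RemovalRequest:
    content_url: str           # must identify the intimate visual depiction
    requester: str             # the victim or an authorized representative
    nonconsent_statement: str  # required assertion of non-consent
    received_at: datetime

def handle_request(req: RemovalRequest) -> datetime:
    """Remove the flagged content and return the compliance deadline."""
    deadline = req.received_at + REMOVAL_WINDOW
    remove_content(req.content_url)           # take down the flagged item
    remove_identical_copies(req.content_url)  # "reasonable efforts" duty
    # Notably absent: no uploader notification or counter-notice step is
    # required, so nothing in this pipeline pushes back on a bad notice.
    return deadline

# Hypothetical stand-ins for a platform's moderation back end.
def remove_content(url: str) -> None:
    print(f"removed {url}")

def remove_identical_copies(url: str) -> None:
    print(f"hash-matched and removed identical copies of {url}")

if __name__ == "__main__":
    req = RemovalRequest(
        content_url="https://example.com/post/123",
        requester="victim or authorized representative",
        nonconsent_statement="I did not consent to this publication.",
        received_at=datetime.now(timezone.utc),
    )
    print(f"must comply by {handle_request(req)}")
```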
What the law does not include is just as important as what it does:
- No mandatory counter-notice procedure for the uploader, unlike §512(g) of the DMCA.
- No penalty for knowingly false or abusive notices — no analog to the DMCA's §512(f).
- No carve-out for clearly lawful content such as journalism, parody, or commentary on already-public material.
- No safe harbor for platforms that, in good faith, decline to remove content they have a reasonable basis to believe is lawful.
Why the speech risk is structural, not hypothetical
The Electronic Frontier Foundation, the Center for Democracy & Technology, and the ACLU all warned during the legislative process that this design creates predictable over-removal pressure. The economics are straightforward: a platform that wrongly leaves up unlawful content faces FTC action and reputational damage; a platform that wrongly removes lawful content faces, under this statute, nothing. When the legal asymmetry is that stark, automated removal becomes the rational corporate response — and automated removal is, as the empirical record of DMCA takedowns shows, prone to false positives, gaming, and weaponization against critics, ex-partners, and journalists.
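A back-of-the-envelope expected-cost comparison makes the incentive concrete. Every figure below is invented for illustration (none of it comes from the statute or FTC practice); the point is that the conclusion holds for any positive enforcement risk so long as wrongful removal costs zero.

```python
# Back-of-the-envelope expected cost per notice under the statute's
# asymmetry. Every number here is an illustrative assumption.

P_VALID = 0.7            # assumed share of notices that are legitimate
FTC_EXPOSURE = 50_000.0  # assumed cost of wrongly leaving content up
WRONG_REMOVAL = 0.0      # statutory cost of wrongly removing lawful content
REVIEW_COST = 40.0       # assumed per-notice cost of human review
REVIEW_MISS = 0.05       # assumed rate at which review rejects a valid notice

# Strategy A: remove everything on notice, no review.
cost_remove_all = (1 - P_VALID) * WRONG_REMOVAL  # always zero under the statute

# Strategy B: human review that occasionally misses a valid notice.
cost_review = REVIEW_COST + P_VALID * REVIEW_MISS * FTC_EXPOSURE

print(f"remove-on-notice: ${cost_remove_all:,.2f} per notice")
print(f"human review:     ${cost_review:,.2f} per notice")
# With wrongful removal priced at zero, remove-on-notice dominates no
# matter how many lawful posts it sweeps away.
```

Under those assumptions, review costs real money while reflexive removal is free, which is exactly the pressure the civil-liberties groups flagged.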
The 48-hour window compounds the problem. It is too short for meaningful human review at scale, especially for smaller platforms without trust-and-safety infrastructure. Larger platforms will lean harder on classifiers; smaller ones will simply remove on notice and ask questions later, if at all.
The lesson of two decades of DMCA practice is that a takedown regime without a credible counter-notice and a penalty for abusive notices does not just risk over-removal — it guarantees it.
A better path is available, and Congress knows it
None of this is an argument for inaction on NCII. It is an argument for the FTC, in its forthcoming compliance guidance, to do the work Congress declined to do — and for Congress to fix the statute on the next legislative vehicle.
Three proportionate adjustments would preserve the law's victim-protection core while substantially reducing collateral speech damage:
- Encourage a notice-quality standard. FTC guidance should make clear that a platform does not violate the statute when it declines, in good faith, to act on a facially deficient notice (one that is anonymous, unverified, or aimed at obviously newsworthy content). The statute's "good faith" language gives the agency room here.
- Push voluntary counter-notice adoption. Platforms that build a counter-notice path and a reinstatement option for wrongly flagged content should be treated as exemplars in FTC enforcement priorities, not penalized for the brief delay a counter-notice window introduces.
- Legislate a §512(f) analog. Knowingly false takedown notices targeting lawful speech should carry a private right of action. This is the single most effective deterrent against weaponization, and it costs the genuine-victim use case nothing.
The pro-innovation stake
The US still has the world's most speech-protective online liability framework, and that framework — Section 230 plus the First Amendment — is a significant reason American platforms host the world's public conversations. Eroding it with a fast, asymmetric takedown regime risks something larger than the immediate over-removal: it normalizes the European notice-and-action logic that the US has, until now, mostly resisted.
The Take It Down Act addresses a real harm. The next year of FTC implementation will determine whether it does so in a way American free expression law can live with, or whether it becomes the template for every future content-removal mandate Congress writes. The agency, and the courts that will inevitably review the first FTC enforcement action, should treat the law's silences on counter-notice and abuse penalties as bugs to be mitigated — not features to be replicated.