One year after President Trump signed the TAKE IT DOWN Act into law in May 2025, its core platform obligation goes live this week. From mid-May 2026, covered online platforms in the United States must remove flagged non-consensual intimate imagery — including AI-generated deepfakes — within 48 hours of a valid request, with enforcement vested in the Federal Trade Commission under its Section 5 unfair-or-deceptive-practices authority.
The underlying harm is real, urgent, and disproportionately borne by women and minors. Tools that strip clothing from photos or splice faces into pornographic videos have collapsed the cost of producing convincing intimate imagery to near zero. A targeted federal remedy for victims — who previously had to navigate a patchwork of state laws and indifferent platform processes — is overdue. The question now facing US intermediary-liability policy is not whether to act, but whether the mechanism Congress chose will actually protect victims without collateral damage to lawful speech, smaller platforms, and the open internet.
What the law actually requires
The TAKE IT DOWN Act criminalises the knowing publication of non-consensual intimate imagery (NCII), including computer-generated depictions of identifiable people, and layers on a separate civil-style takedown duty for "covered platforms." The covered-platform definition reaches consumer-facing services that primarily provide a forum for user-generated content — broad enough to capture not just the largest social networks but also smaller forums, fediverse instances, and niche communities.
The key operational requirements are straightforward on paper (a rough implementation sketch follows the list):
- A designated point of contact and an accessible reporting mechanism for victims.
- Removal of validly reported NCII within 48 hours.
- "Reasonable efforts" to remove identical copies.
- FTC enforcement of the takedown duties, with violations treated as unfair or deceptive acts.
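To make "straightforward on paper" concrete, here is a minimal sketch of what the intake side of that duty implies: a logged request, a running 48-hour clock, and a fingerprint of the reported file for the later identical-copies sweep. It is illustrative only; the field names are hypothetical, and nothing below is drawn from the statute or any platform's actual tooling.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import hashlib

# Illustrative sketch only. Field names and workflow are hypothetical,
# not taken from the Act or any real platform's trust-and-safety stack.

REMOVAL_WINDOW = timedelta(hours=48)  # clock starts at receipt of a valid request

@dataclass
class RemovalRequest:
    reporter_contact: str   # ties back to the accessible reporting mechanism
    content_id: str         # the platform's identifier for the flagged item
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    @property
    def deadline(self) -> datetime:
        # Removal must happen before this moment to stay inside 48 hours.
        return self.received_at + REMOVAL_WINDOW

def fingerprint(path: str) -> str:
    # Exact-match fingerprint of the reported file, kept for the
    # "reasonable efforts to remove identical copies" sweep later on.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Even at this toy level, the cost driver is visible: someone has to be watching that deadline around the clock, every day of the year.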
What it does not include is equally important. Unlike the Digital Millennium Copyright Act, the statute has no counter-notice procedure that automatically restores content if the uploader pushes back. There is no robust penalty for knowingly false reports written into the takedown regime itself. And there is no carve-out that scales obligations to platform size or risk profile.
The overblocking problem
Notice-and-takedown regimes with short deadlines and asymmetric liability tend to produce one predictable outcome: when in doubt, platforms remove. This is not a hypothetical concern. Two decades of DMCA experience, well documented in research drawing on the Lumen database at Harvard's Berkman Klein Center, show that copyright takedown notices are regularly used to suppress criticism, competition, and lawful commentary. Germany's NetzDG, the EU's Digital Services Act, and the UK's Online Safety Act have all faced similar critiques.
A 48-hour clock is twice as long as NetzDG's 24-hour window for "manifestly unlawful" content, but it applies to a category — intimate imagery of identifiable people — where ground-truth verification is genuinely hard. Determining whether an image is consensual, whether the person depicted is the person who reported it, or whether a face-swap is convincing enough to count as a depiction of that person requires judgment that a queue-processing moderator working against an FTC deadline does not have.
The Electronic Frontier Foundation and the Center for Democracy & Technology raised exactly these concerns during the bill's passage, warning that the structure invites weaponisation: an aggrieved ex-partner, a political opponent, or a coordinated harassment campaign can file reports that platforms have strong incentives to honour first and verify later.
Where this hits hardest
Large platforms will absorb compliance costs. Meta, Google, Snap, and TikTok already operate global NCII removal workflows, including participation in the StopNCII.org hash-matching scheme run by the UK's Revenge Porn Helpline. For them, the marginal lift is process documentation, FTC reporting hooks, and faster triage queues.
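For readers who have not encountered hash-matching, the sketch below shows the basic matching loop under one deliberate simplification: it compares exact cryptographic hashes, which only catch byte-identical copies, whereas StopNCII relies on perceptual hashing so that resized or re-encoded copies still match. The function and variable names are hypothetical.

```python
# Simplified illustration of the "identical copies" sweep. Real schemes
# such as StopNCII use perceptual hashes so near-duplicates also match;
# an exact SHA-256 comparison, as here, misses any re-encoded copy.

def find_identical_copies(uploaded_items: dict[str, str],
                          reported_hashes: set[str]) -> list[str]:
    """Return the content IDs whose stored hash appears in the set of
    hashes taken from validly reported NCII."""
    return [content_id
            for content_id, digest in uploaded_items.items()
            if digest in reported_hashes]
```

The distance between those two matching strategies is roughly where arguments over what counts as "reasonable efforts" will play out.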
The pressure falls on the tier below: mid-size forums, independent video hosts, federated services, and start-ups. Building a 24/7 trust-and-safety operation that can hit a 48-hour SLA, defend its decisions to the FTC, and process appeals is a non-trivial fixed cost. Researchers at the R Street Institute and Stanford's Cyber Policy Center have noted that without a tiered approach, US intermediary policy risks entrenching incumbents — the opposite of what a competitive, innovative internet sector needs.
A better path: proportionate, evidence-based
The law is now in force; the policy task shifts to how the FTC implements it. A pro-innovation, victim-protective approach would emphasise four things.
First, FTC guidance should make explicit that good-faith errors do not give rise to Section 5 liability, and that reasonable verification steps — including using hash-matching infrastructure like StopNCII — satisfy the duty.
Second, the agency should publish data on reporting volumes, takedown accuracy, and abuse of process, building the empirical record that was conspicuously thin during legislative debate.
Third, Congress should revisit the statute to add a structured counter-notice and re-upload pathway, mirroring DMCA §512(g), and impose meaningful penalties on knowingly false reporters.
Fourth, smaller platforms below a defined user threshold should be given a longer compliance window or simplified obligations, modelled on the DSA's tiered approach for Very Large Online Platforms versus everyone else.
Protecting victims of image-based abuse and preserving a vibrant, competitive online speech environment are not in tension — but only if the implementation gets the details right. The next twelve months of FTC practice will determine which version of the TAKE IT DOWN Act America ends up with.