One year after President Trump signed the Take It Down Act into law in May 2025, its core obligation — that covered platforms remove flagged non-consensual intimate imagery (NCII), including AI-generated deepfake nudes, within 48 hours of a victim's notice — is now fully in force. The Federal Trade Commission, designated as the enforcement agency, can now treat non-compliance as an unfair or deceptive practice under Section 5 of the FTC Act. For social media services, image hosts, and a broad swath of consumer-facing platforms, May 2026 marks the end of the grace period and the beginning of real legal exposure.
The law's goal is unambiguously good. The proliferation of generative-AI "nudify" apps and the targeting of teenagers — including high-profile incidents in schools across the United States — produced a rare moment of bipartisan consensus. Sponsored by Senators Ted Cruz and Amy Klobuchar and championed publicly by First Lady Melania Trump, the bill cleared both chambers with near-unanimous support. That coalition reflects a genuine harm: synthetic intimate imagery is cheap to produce, devastating to victims, and historically slow to come down.
What the Act actually requires
The statute does two distinct things, and the difference matters. First, it creates federal criminal liability for the knowing publication of NCII, including computer-generated imagery that is indistinguishable from a real person. Second, and more operationally consequential for the tech sector, it imposes a notice-and-removal duty on "covered platforms" — broadly, online services that host user-generated content available to the public.
Once a platform receives a valid request from an identifiable victim (or their authorized representative), it has 48 hours to remove the content and make reasonable efforts to identify and remove identical copies. Platforms must designate a point of contact and publish a clear notice process. Crucially — and unusually for a content-moderation statute — enforcement runs through the FTC, not through private rights of action against platforms, which limits the most obvious litigation-abuse risk.
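To make the operational burden concrete, here is a minimal sketch of how a platform's trust-and-safety pipeline might track the 48-hour clock and flag exact duplicates of reported files. Everything here is illustrative: the statute and the FTC prescribe neither these class names nor this matching approach, and real systems would layer perceptual hashing (such as PDQ or pHash) on top of exact matching to catch re-encoded copies.

```python
# Illustrative sketch only: tracking the 48-hour removal window and
# matching exact duplicates of reported content. Names, structures,
# and thresholds are assumptions, not statutory or FTC requirements.
import hashlib
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # statutory deadline after a valid notice


@dataclass
class TakedownTicket:
    notice_id: str
    received_at: datetime
    content_hashes: set[str] = field(default_factory=set)  # SHA-256 of reported files

    @property
    def deadline(self) -> datetime:
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime | None = None) -> bool:
        return (now or datetime.now(timezone.utc)) > self.deadline


def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()


def matches_reported_content(upload: bytes, ticket: TakedownTicket) -> bool:
    """Exact-duplicate check against a ticket's reported files."""
    return sha256_of(upload) in ticket.content_hashes
```

The point of the sketch is the shape of the obligation, not the implementation: a deadline attached to each valid notice, and some mechanism for recognizing copies of the reported material as they are uploaded.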
The case for proportionate optimism
From an innovation-policy standpoint, there is a lot to like in the architecture Congress chose. A single federal baseline brings consistency to a chaotic patchwork of state-level deepfake bills, each with its own definitions and timelines. The 48-hour clock is aggressive but achievable for the major platforms, most of which already operate trust-and-safety workflows for the analogous problem of non-AI NCII under voluntary programs like StopNCII.org. And the FTC-only enforcement model avoids turning every disputed takedown into a federal lawsuit.
Compare this with the European trajectory under the Digital Services Act, where overlapping obligations on illegal content, systemic-risk assessments, and national digital-services coordinators have created compliance overhead that mid-sized platforms struggle to absorb. The Take It Down Act, by contrast, targets a specific, identifiable category of harm with a specific remedy.
Where implementation can still go wrong
That said, the next twelve months will determine whether the statute becomes a model or a cautionary tale. Three risks stand out.
1. Takedown abuse. The Digital Millennium Copyright Act's notice-and-takedown regime is the closest analogue, and decades of data show it is routinely weaponized — to silence critics, suppress journalism, and harass political opponents. The Electronic Frontier Foundation and the Cyber Civil Rights Initiative, while taking opposing views on the statute overall, both flagged that a 48-hour clock combined with FTC pressure creates strong incentives for platforms to remove first and ask questions later. Bad-faith notices targeting satire, journalism about public figures, or even consensual adult content mislabeled as NCII are predictable. Robust counter-notice procedures, transparency reporting, and explicit safe harbors for good-faith retention of clearly lawful content are essential.
2. Smaller platforms and new entrants. A 48-hour SLA is straightforward for Meta, Google, and Snap. It is non-trivial for a five-person startup, an open-source forum, or a federated service. The FTC's compliance guidance should calibrate expectations to platform scale and risk, in the spirit of the DSA's very-large-online-platform tiering, without creating a compliance moat that entrenches incumbents.
3. End-to-end encrypted services. The statute applies to publicly available content, not private messaging, but the line between a "public" channel and a "private" group of 10,000 will be contested. Regulators should affirm that the Act does not require platforms to undermine end-to-end encryption or scan private messages — a position consistent with the bipartisan rejection of client-side scanning mandates in prior debates.
What good enforcement looks like
The FTC has signaled a measured posture so far, emphasizing platform engagement over enforcement actions during the first year. That approach should continue. Targeted enforcement against bad actors — sites built explicitly to host deepfake nudes, or platforms that ignore valid notices — is appropriate. Aggressive enforcement against good-faith platforms making reasonable mistakes will chill exactly the kind of trust-and-safety investment the law is trying to encourage.
The Take It Down Act represents Congress at something close to its best: identifying a concrete harm, designing a narrow remedy, and avoiding the temptation to graft on broader content-control mandates. The next phase is on the FTC and the platforms. If both sides treat the 48-hour rule as a floor for victim protection rather than a ceiling for innovation, the United States will have built a deepfake-response regime worth exporting. If they don't, it will become another cautionary case study in how well-intentioned content rules calcify into compliance theatre.