TAKE IT DOWN Act
The TAKE IT DOWN Act (signed May 2025) is the first U.S. federal law explicitly targeting harmful AI-generated imagery, criminalizing non-consensual deepfakes and mandating 48-hour platform takedowns.
The TAKE IT DOWN Act (Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act) is a U.S. federal law signed by President Donald Trump on May 19, 2025, that criminalizes the knowing publication or threatened publication of non-consensual intimate imagery (NCII)—including both authentic photographs and AI-generated deepfakes—and requires covered online platforms to remove such content within 48 hours of a verified victim request.1 It is the first federal law specifically targeting harmful AI use in intimate imagery.2
Quick Assessment
| Attribute | Detail |
|---|---|
| Full Name | Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act |
| Jurisdiction | United States (federal) |
| Signed into law | May 19, 2025 |
| Lead sponsors | Sen. Ted Cruz (R-TX), Sen. Amy Klobuchar (D-MN) |
| Enforcement body | Federal Trade Commission (FTC) |
| Platform compliance deadline | May 19, 2026 |
| Criminal penalty (adults) | Up to 2 years imprisonment |
| House vote | 409–2 |
| Senate vote | Unanimous |
Key Links
| Source | Link |
|---|---|
| Official Text | govinfo.gov |
| Congress.gov | S.146 |
Overview
The TAKE IT DOWN Act addresses one of the more direct and immediate harms arising from AI-generated imagery: the production and distribution of sexualized deepfakes depicting real, identifiable individuals without their consent. The law closes a significant gap in prior federal law, which lacked a nationwide prohibition specifically covering digitally fabricated intimate imagery. All 50 states had enacted some form of NCII protection prior to the Act, but many of those statutes did not cover AI-generated content or "digital forgeries," leaving victims of deepfake abuse without federal recourse.3
The Act operates on two main tracks. First, it creates a federal criminal offense: anyone who knowingly publishes or threatens to publish NCII—whether real or AI-generated—faces up to two years imprisonment, with harsher penalties when the victim is a minor. The law clarifies that consent to the creation of an image does not constitute consent to its distribution, a provision aimed squarely at sextortion scenarios.1 Second, it imposes affirmative platform obligations: covered services (websites, social media applications, and apps hosting user-generated content, excluding ISPs and email providers) must establish notice-and-takedown processes by May 19, 2026, and must act on verified victim requests within 48 hours, including making reasonable efforts to remove duplicate copies and reposts.1
Enforcement of the platform obligations falls to the Federal Trade Commission under existing Federal Trade Commission Act authority. Platforms acting in good faith receive safe harbor protections from liability. The law also includes carve-outs shielding medical professionals, law enforcement, journalists, and artistic works from prosecution, and explicitly preserves First Amendment protections for speech on matters of public concern.3
History
Triggering Incident
The Act's origins trace to a 2023 incident in Aledo, Texas, in which a high school student used AI software to manipulate innocent photographs of female classmates into nude images, which were then distributed anonymously via Snapchat. The incident drew significant public attention to the inadequacy of existing law in addressing AI-facilitated sexual abuse imagery targeting minors, and directly motivated Senator Ted Cruz to pursue federal legislation.4
Legislative Progression
Senator Cruz introduced the Senate bill (S. 4569) in June 2024, with Senator Amy Klobuchar of Minnesota as a leading bipartisan co-sponsor. A companion House bill (H.R. 8989) was introduced that same month. The bill attracted support from eleven additional senators across both parties, as well as endorsements from over 100 organizations including Microsoft, the NCAA, SAG-AFTRA, the National Organization for Women, IBM, the National Center for Missing and Exploited Children, and RAINN.5
The Senate passed the bill unanimously, but it stalled in the House during the 118th Congress due to packaging complications with budget legislation. Reintroduced as S. 146 in the 119th Congress, it again passed the Senate by unanimous consent, and the House passed it 409–2 in April 2025. President Trump signed the Act into law on May 19, 2025, at a ceremony attended by survivors and advocates including representatives from RAINN.5 First Lady Melania Trump was a prominent advocate for the legislation, hosting a White House roundtable on March 3, 2025, with Elliston Berry, who was 14 when AI-generated nude images of her were circulated in the Aledo incident and whose case had drawn national attention.4
Implementation Timeline
Criminal provisions took effect immediately upon the Act's signing. Covered platforms have until May 19, 2026—one year post-enactment—to implement compliant notice-and-takedown systems. The FTC has not yet announced a detailed enforcement strategy as of early 2026, though legal advisors have urged platforms to document good-faith compliance efforts during the implementation period.2
Core Provisions
Criminal Liability
The Act makes it a federal crime to knowingly publish or threaten to publish NCII—defined to include both authentic images obtained without a reasonable expectation of privacy and AI-generated deepfakes depicting identifiable individuals in sexually explicit contexts—with the intent to harm, harass, humiliate, degrade, or arouse. The offense carries up to two years imprisonment for imagery involving adult victims, with heightened penalties when the victim is a minor. The law covers threats as well as actual publication, directly targeting sextortion schemes. It applies across interstate commerce, encompassing social media platforms, websites, and applications.1
Platform Takedown Obligations
Covered platforms—broadly defined to include websites, social media services, and apps that host user-generated content—must establish processes allowing victims to submit verified removal requests, typically requiring an electronic signature and location details sufficient to identify the content. Upon receiving a valid request, platforms have 48 hours to remove the reported content and must make reasonable efforts to identify and delete copies and reposts. Internet service providers and email services are excluded from coverage.1
The FTC enforces platform compliance as a matter of unfair or deceptive trade practices under the FTC Act, with jurisdiction extended to include nonprofit organizations. Platforms that document good-faith compliance efforts receive safe harbor protection against liability for both mistaken removals and for content they fail to identify.3
Protections and Carve-Outs
The Act includes explicit protections for good-faith disclosures by medical professionals, law enforcement, and related actors. It preserves First Amendment protections for journalism, artistic expression, and matters of public concern. It does not preempt existing state NCII laws, meaning that state and federal regimes continue to operate in parallel. The Act also does not create a private right of action for victims against platforms, limiting direct civil remedies to the FTC enforcement track.3
Relationship to AI Governance
The TAKE IT DOWN Act is notable as the first U.S. federal law explicitly restricting a specific harmful application of AI—the generation of non-consensual intimate imagery—rather than addressing AI governance in broad structural terms. In this sense it represents a narrowly targeted, harm-specific approach to AI regulation, distinct from comprehensive AI governance frameworks such as the EU AI Act or broader state-level proposals like California's Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047).
The Act is also related to the DEFIANCE Act, a separate bill that would provide civil remedies for victims of AI-generated NCII, and complements but does not supersede state-level legislation. Advocates described the Act as a targeted fix for a specific class of AI-enabled harm, rather than as comprehensive AI governance.2
Criticisms and Concerns
Despite near-unanimous congressional support, the Act has drawn criticism from civil liberties and technology policy organizations, primarily centered on risks to free expression and platform operations.
Censorship and Overreach Risks
The Electronic Frontier Foundation (EFF), the Cyber Civil Rights Initiative (CCRI), the Cato Institute, and Public Knowledge have raised concerns about the Act's potential for misuse. Critics argue that the 48-hour removal deadline creates strong pressure for platforms—especially smaller ones—to over-remove content rather than risk FTC enforcement, without adequate verification that reported content actually constitutes NCII. Unlike the Digital Millennium Copyright Act (DMCA), the Act lacks counter-notice procedures, anti-abuse provisions, or restoration mechanisms for wrongly removed content.6
EFF and others warned that the broad definition of covered content, applicable across all user-generated content forums including private messaging applications, could be exploited to suppress satire, journalism, political speech, or commercial content unrelated to NCII. Critics also noted that FTC jurisdiction extended to nonprofits raises concerns about the potential politicization of enforcement.6
The CCRI, which otherwise supports the criminalization of NCII, specifically opposed the notice-and-takedown framework on constitutional grounds, arguing that it could suppress lawful speech and was drafted more broadly than necessary to address the targeted harm.6
Encryption and Privacy Concerns
Requirements that platforms make "reasonable efforts" to identify and remove copies of reported content have prompted concerns that compliance could require scanning private communications, potentially undermining end-to-end encryption in messaging applications.6
Enforcement Gaps
The Act does not preempt state laws, creating potential inconsistencies for national platforms operating across multiple jurisdictions, each with distinct NCII frameworks. The lack of a private right of action against non-compliant platforms also limits victim recourse outside the FTC enforcement channel. Compliance burdens on small platforms may be substantial given the requirement to document good-faith efforts and implement robust reporting systems within one year of enactment.3
Counterarguments
Proponents respond that the Act's explicit First Amendment carve-outs, its focus on knowing and intentional conduct, and the good-faith safe harbor for platforms provide meaningful protections against the most significant overreach risks. The near-unanimous congressional votes suggest broad consensus that the Act appropriately balances victim protection against free speech concerns, even if critics maintain the balance is imperfect. Advocates including RAINN and the National Center for Missing and Exploited Children characterized the Act as a long-overdue response to documented, severe harms, particularly to minors.7
Key Uncertainties
- Enforcement effectiveness: No post-enactment enforcement data is yet available. Whether the FTC will bring actions under the Act, and on what timeline, remains to be seen.
- Platform compliance: It is unclear how platforms—particularly smaller services—will implement compliant takedown systems by the May 2026 deadline, and whether automated removal systems will be prone to the over-removal concerns critics have raised.
- Encryption implications: The extent to which the "reasonable efforts" standard for duplicate removal will be interpreted to require scanning of end-to-end encrypted communications has not been resolved by the FTC or courts.
- Constitutional challenges: As of early 2026, no major legal challenges to the Act have been reported, but the constitutional questions raised by critics regarding vagueness and overbreadth have not been adjudicated.
- Interaction with state law: The absence of preemption leaves open questions about how the federal regime will interact with the varied landscape of state NCII statutes.
Sources
- TAKE IT DOWN Act, S. 146, 119th Congress. Full text.
- IAPP. "TAKE IT DOWN Act: The next bipartisan US federal privacy, AI law." IAPP.
- Congressional Research Service. "The TAKE IT DOWN Act: A Federal Law Prohibiting the Nonconsensual Publication of Intimate Images." LSB11314.
- Senator Amy Klobuchar. "Klobuchar's Bipartisan TAKE IT DOWN Act Signed into Law." Press release.
- Electronic Frontier Foundation. "Politicians Rushed Through An Online Speech 'Solution.' Victims Deserve Better." EFF.