A horrific new story hits our feeds every day. Social media platforms and other online spaces are drowning in a flood of abusive generative artificial intelligence (GenAI) content. As with anything on the internet, users are creating a broad range of content, from entertaining to endearing and from insightful to brain-rotting slop. Sadly, a tremendous and growing quantity of visual content created with GenAI tools and distributed online is abusive, exploitative, and often unlawful.
Extensive public reports indicate that GenAI tools, including Elon Musk’s Grok, are being used to create sexually suggestive or explicit “deepfake” images and videos. Malicious users of these technologies have created sexualized images and videos of nonconsenting celebrities and ordinary individuals, and even depictions of the sexual abuse of children.
The legal landscape around online privacy and artificial intelligence is changing rapidly. The TAKE IT DOWN Act is the most recent federal response to this crisis. Additional bipartisan legislation is pending, including the ENFORCE Act, which would apply the same criminal penalties to creators and distributors of child sexual abuse material whether the material was created using the victim directly or using GenAI.
As the abuse of GenAI tools to create sexualized content continues to proliferate, platforms have growing obligations to protect the public, law enforcement authorities have a growing appetite to seek accountability, and victims have a growing number of legal tools they can use to protect themselves and fight back.
The TAKE IT DOWN Act
The TAKE IT DOWN Act prohibits knowingly posting, or threatening to post, intimate images or AI-generated deepfakes of another person without consent. The law applies regardless of whether the content is authentic or computer-generated. Violations of the Act carry criminal penalties of up to two years of imprisonment for offenses involving adults and up to three years for offenses involving minors, along with fines, forfeiture, and mandatory restitution.
The TAKE IT DOWN Act also requires platforms that host user-generated content to implement, by May 16, 2026, formal notice-and-removal procedures allowing individuals to notify the platform of the image and request its removal. A valid takedown request must be submitted in writing and include:
- A physical or electronic signature
- Identification of the person in the image
- Information to locate the image
- A brief good-faith statement that the image was shared without consent along with any relevant supporting information
- Contact information for follow-up
Upon receipt of a valid takedown request, covered platforms must remove the content, and any identical copies, as soon as possible and no later than 48 hours after receipt. The Act provides safe-harbor protections for platforms that act in good faith, even if content is later determined to be lawful.
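For platform operators translating these requirements into intake tooling, the statutory elements map onto a simple data model. The sketch below is a hypothetical illustration in Python: the field names and the `TakedownRequest` structure are assumptions made for illustration, not statutory language or any particular platform’s system, and it is a starting point for compliance engineering rather than a definitive implementation.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TakedownRequest:
    """Hypothetical model of a takedown request, mirroring the elements
    the TAKE IT DOWN Act requires in a valid written request. Field
    names are illustrative assumptions, not statutory text."""
    signature: str             # physical or electronic signature
    identified_person: str     # identification of the person in the image
    content_locator: str       # information to locate the image (e.g., a URL)
    good_faith_statement: str  # brief statement that the image was nonconsensual
    contact_info: str          # contact information for follow-up
    received_at: datetime      # when the platform received the request

    def is_facially_valid(self) -> bool:
        """True if every required element is present (non-empty)."""
        return all(field.strip() for field in (
            self.signature,
            self.identified_person,
            self.content_locator,
            self.good_faith_statement,
            self.contact_info,
        ))

    def removal_deadline(self) -> datetime:
        """Content, and any identical copies, must come down as soon as
        possible and no later than 48 hours after receipt."""
        return self.received_at + timedelta(hours=48)
```

In practice, a compliance program would layer identity verification, detection of “identical copies,” and audit logging on top of a model like this; the point is simply that the Act’s required elements and its 48-hour clock translate directly into checkable fields and deadlines.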
FTC Enforcement Authority and Section 230 Shift
The Federal Trade Commission (FTC) is charged with enforcing the TAKE IT DOWN Act’s notice-and-removal requirements. Online platforms have historically relied on Section 230 of the Communications Decency Act for broad immunity from liability stemming from user-generated content. The TAKE IT DOWN Act narrows that protection by imposing affirmative compliance obligations and authorizing the FTC to treat a platform’s failure to comply as an unfair or deceptive act or practice under federal consumer protection law.
Violations expose covered platforms, including nonprofit entities, to civil penalties, injunctive relief, and other remedies available to the FTC under its existing authorities, including the Federal Trade Commission Act. The FTC Act does not, however, create an express private right of action for individuals victimized by nonconsensual sexually explicit GenAI content.
State Legislative and Regulatory Responses
Alongside recent federal action, state attorneys general have begun using existing consumer protection, child safety, and criminal enforcement authorities to pressure platforms and AI developers to curb the creation and distribution of nonconsensual AI-generated intimate images.
- A bipartisan coalition of 35 state and territorial attorneys general demanded that xAI take additional action to prevent its chatbot, Grok, from generating nonconsensual intimate images and child sexual abuse material, and urged removal of already produced exploitative content.
- Several state agencies announced that they are evaluating the availability of state civil and criminal remedies for residents affected by nonconsensual AI-generated sexual images.
- The coalition urged xAI to (i) disable Grok’s ability to produce nonconsensual intimate images and child sexual abuse material, (ii) eliminate existing exploitative content, (iii) take action against users generating illegal content, and (iv) provide users with control over whether their content can be altered by Grok.
Recent Enforcement Actions and Global Regulatory Pressure
Although the TAKE IT DOWN Act’s formal compliance deadline is still months away, recent developments at the state, federal, and international levels underscore the urgency for platforms and AI developers to put notice-and-removal protocols in place:
- French cybercrime police reportedly executed searches of the Paris offices of X as part of an investigation of Grok’s use to create and disseminate child sexual abuse material.
- Governments in Indonesia and Malaysia temporarily blocked access to Grok, citing ineffective safeguards and violations of national online-safety and obscenity laws.
- Regulators in the United Kingdom, the European Union, India, and Australia have announced investigations or signaled potential enforcement actions under their respective online-safety frameworks.
- In the United States, advocacy organizations have publicly called on the Department of Justice and the FTC to investigate AI-enabled platforms under existing child sexual abuse material laws in addition to the TAKE IT DOWN Act.
These actions illustrate a broader regulatory trend: Governments are increasingly unwilling to wait for post hoc content moderation and are instead demanding proactive safeguards, accountability mechanisms, and governance controls for AI-driven platforms.
Depending on the jurisdiction, regulators may impose civil penalties, injunctive relief, platform access restrictions or bans, compliance mandates, and, in some cases, criminal liability under existing consumer protection, online safety, and child exploitation laws.
Practical Implications for Companies
For companies that host user-generated content or deploy generative AI tools, these developments carry immediate implications:
- Compliance planning should begin now. Waiting until the May 2026 deadline increases enforcement and reputational risk.
- Notice-and-removal workflows must be operational, documented, and tested, not merely drafted (see the sketch after this list for one way to measure this).
- AI safety guardrails, monitoring, and escalation procedures are increasingly viewed as baseline expectations rather than optional best practices.
- Global operations face compounded risk, as foreign regulators may act faster and more aggressively than US authorities.
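One way to make “tested, not merely drafted” concrete is to measure, from workflow logs, whether removals actually beat the 48-hour statutory clock. The snippet below is a minimal hypothetical monitoring check; the function names and log format are assumptions for illustration, not references to any real tooling.

```python
from datetime import datetime, timedelta

STATUTORY_WINDOW = timedelta(hours=48)  # TAKE IT DOWN Act removal deadline

def removal_within_window(received_at: datetime, removed_at: datetime) -> bool:
    """True if the content came down within the 48-hour statutory window."""
    return removed_at - received_at <= STATUTORY_WINDOW

def compliance_rate(events: list[tuple[datetime, datetime]]) -> float:
    """Share of (received_at, removed_at) pairs that met the deadline.
    Run against real workflow logs to verify the process end to end."""
    if not events:
        return 1.0
    met = sum(removal_within_window(received, removed) for received, removed in events)
    return met / len(events)
```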
Legal Options for Victims and Their Parents and Guardians
Individual victims, and the parents and guardians of children depicted in sexually abusive GenAI content, have a growing array of legal tools to seek removal of the material from platforms and to pursue accountability from those who create the abusive material, from the developers of the tools used to create it, and from the platforms that host it:
- Civil lawsuits against platforms and GenAI developers under existing laws and legal theories
- Formal notice-and-removal requests to covered platforms requiring the removal of nonconsensual intimate images and known identical copies within statutory timelines
- Court orders limiting further distribution of the material or requiring compliance with removal obligations
- Reporting of violations to law enforcement authorities, including the Department of Justice, where conduct may constitute federal crimes involving nonconsensual intimate images, digital forgeries, or child sexual abuse material
Zachary A. Myers is a former United States Attorney and member of the US Attorney General’s Child Exploitation Working Group. He spent 10 years as an Assistant US Attorney leading many investigations and prosecutions of child exploitation and sex trafficking, as well as other cyber- and technology-facilitated crimes. For more information about how you or your company can navigate the changing legal landscape surrounding the abuse of GenAI tools, please contact Zach or any member of McCarter & English’s Cybersecurity & Data Privacy team.
*Donnie Oliver, a law clerk at McCarter not yet admitted to the bar, contributed to this alert.
