WASHINGTON, D.C. — In a move to address the rise of AI-powered harassment and non-consensual online exploitation, U.S. President Donald Trump on Monday signed the “Take It Down Act” into law, making it a federal crime to share so-called revenge porn — including artificial intelligence-generated deepfakes — without consent.
The law, passed with broad bipartisan support in Congress, marks the first federal legislation explicitly targeting both real and synthetic intimate images shared without the subject’s permission.
“With the rise of AI image generation, countless women have been harassed with deepfakes and other explicit images distributed against their will,” Trump said during the signing ceremony at the White House Rose Garden. “And today we’re making it totally illegal.”
The new law establishes criminal penalties of up to three years in prison for individuals who knowingly share explicit content without the subject's consent.
It also imposes obligations on tech companies and online platforms, requiring them to remove such content within 48 hours of notification or face civil liability.
The First Lady, Melania Trump, who has remained largely out of the public eye during her husband’s presidency, made a rare appearance to endorse the law, calling it a “national victory” for families.
“This legislation is a powerful step forward in our efforts to ensure that every American, especially young people, can feel better protected from their image or identity being abused,” she said.
Tackling the Deepfake Crisis
The legislation comes amid mounting concern over the explosion of deepfake pornography, a phenomenon where AI tools are used to fabricate realistic nude images or explicit videos, often targeting women and minors.
The technology has been used to exploit celebrities, politicians, and ordinary individuals alike — with serious emotional and psychological consequences.
High-profile victims have included singer Taylor Swift and U.S. Congresswoman Alexandria Ocasio-Cortez, but experts warn that non-public individuals are even more vulnerable, especially teenagers.
Across several U.S. states, schools have reported disturbing incidents involving AI-generated explicit content created and circulated by students.
“This bill is a significant step in addressing the exploitation of AI-generated deepfakes and non-consensual imagery,” said Renee Cummings, an AI ethicist and criminologist at the University of Virginia. “Its effectiveness will depend on swift and sure enforcement, severe punishment for perpetrators, and real-time adaptability to emerging digital threats.”
Free Speech Concerns
While welcomed by many as a long-overdue legal shield for victims, the law has drawn criticism from some digital rights advocates who worry about its implications for free expression.
The Electronic Frontier Foundation (EFF), a nonprofit focused on civil liberties in the digital world, warned that the law could be misused.
“The Take It Down Act gives the powerful a dangerous new route to manipulate platforms into removing lawful speech that they simply don’t like,” the EFF said in a statement.
Under the law, websites and social media platforms must now implement robust procedures to process and act on takedown requests from victims of non-consensual imagery — an obligation some platforms may struggle to fulfill, especially smaller or decentralized ones.
‘A Legal Weapon’
For many victims and families, however, the law offers long-awaited recourse.
Dorota Mani, a mother whose daughter was victimized by non-consensual image sharing, called the bill "a very powerful" and empowering development.
“Now I have a legal weapon in my hand, which nobody can say no to,” she said.