Privacy and digital rights advocates are raising alarms over a law that many would expect them to cheer: a federal crackdown on revenge porn and AI-generated deepfakes.
The newly signed Take It Down Act makes it illegal to publish nonconsensual explicit images, whether real or AI-generated, and gives platforms just 48 hours to comply with a victim's takedown request or face liability. While widely praised as a long-overdue win for victims, experts warn that its vague language, lax standards for verifying claims, and tight compliance window could pave the way for overreach, censorship of legitimate content, and even surveillance.
“Content moderation at scale is widely problematic and always ends up with important and necessary speech being censored,” India McKinney, director of federal affairs at the digital rights group Electronic Frontier Foundation, told TechCrunch.
Online platforms have one year to establish a process for removing nonconsensual intimate imagery (NCII). While the law requires that takedown requests come from victims or their representatives, it asks only for a physical or electronic signature; no photo ID or other form of verification is needed. That likely aims to reduce barriers for victims, but it could also create an opportunity for abuse.
“I really want to be wrong about this, but I think there are going to be more requests to take down images depicting queer and trans people, and even more than that, I think it’s going to be consensual porn,” McKinney said.
Senator Marsha Blackburn (R-TN), a co-sponsor of the Take It Down Act, also sponsored the Kids Online Safety Act, which puts the onus on platforms to protect children from harmful content online. Blackburn has said she believes content related to transgender people is harmful to kids. Similarly, the Heritage Foundation, the conservative think tank behind Project 2025, has said that keeping trans content away from children is protecting kids.
Because of the liability platforms face if they don’t remove an image within 48 hours of receiving a request, “the default is going to be that they just take it down without doing any investigation to see if this actually is NCII, or if it’s another type of protected speech, or if it’s even relevant to the person who’s making the request,” McKinney said.
Both Snapchat and Meta have said they support the law, but neither responded to TechCrunch’s requests for more information about how they will verify whether the person requesting a takedown is actually a victim.
Mastodon, a decentralized platform that hosts its own flagship server that others can join, told TechCrunch it would lean toward removal if verifying the victim proved too difficult.
Mastodon and other decentralized platforms, like Bluesky or Pixelfed, may be especially vulnerable to the chilling effect of the 48-hour takedown rule. These networks rely on independently operated servers, often run by nonprofits or individuals. Under the law, the FTC can treat any platform that doesn’t comply with takedown demands as committing an “unfair or deceptive act or practice,” even if the host isn’t a commercial entity.
“This is troubling on its face, but it is particularly so at a moment when the chair of the FTC has taken unprecedented steps to politicize the agency and has explicitly promised to use the power of the agency to punish platforms and services on an ideological, as opposed to principled, basis,” the Cyber Civil Rights Initiative, a nonprofit dedicated to ending revenge porn, said in a statement.
Proactive monitoring
McKinney predicts that platforms will start moderating content before it’s disseminated, so they have fewer problematic posts to take down in the future.
Platforms are already using AI to monitor harmful content.
Kevin Guo, CEO and co-founder of AI-generated content detection startup Hive, said his company works with online platforms to detect deepfakes and child sexual abuse material (CSAM). Some of Hive’s customers include Reddit, Giphy, Vevo, Bluesky, and BeReal.
“We were actually one of the tech companies that endorsed that bill,” Guo told TechCrunch. “It’ll help solve some pretty important problems and compel these platforms to adopt solutions more proactively.”
Hive’s model is software-as-a-service, so the startup doesn’t control how platforms use its product to flag or remove content. But Guo said many clients insert Hive’s API at the point of upload, so content is monitored before anything gets sent out to the community.
A Reddit spokesperson told TechCrunch that the platform uses “sophisticated internal tools, processes, and teams to address and remove” NCII. Reddit also partners with the nonprofit SWGfl to deploy its StopNCII tool, which scans live traffic for matches against a database of known NCII and removes accurate matches. The company did not share how it would ensure that the person requesting a takedown is the victim.
McKinney warns that this kind of monitoring could extend into encrypted messages in the future. While the law focuses on public or semi-public dissemination, it also requires platforms to “remove and make reasonable efforts to prevent the reupload” of nonconsensual intimate images. She argues that this could incentivize proactive scanning of all content, even in encrypted spaces. The law doesn’t include any carve-outs for end-to-end encrypted messaging services like WhatsApp, Signal, or iMessage.
Meta, Signal, and Apple have not responded to TechCrunch’s requests for more information about their plans for encrypted messaging.
Broader free speech implications
On March 4, Trump delivered a joint address to Congress in which he praised the Take It Down Act and said he looked forward to signing it into law.
“And I’m going to use that bill for myself, too, if you don’t mind,” he added. “There’s nobody who gets treated worse than I do online.”
While the audience laughed, not everyone took the comment as a joke. Trump hasn’t been shy about suppressing or retaliating against unfavorable speech, whether that’s labeling mainstream media outlets “enemies of the people,” barring the Associated Press from the Oval Office despite a court order, or pulling funding from NPR and PBS.
On Thursday, the Trump administration barred Harvard from accepting foreign students, escalating a conflict that began after the university refused to comply with Trump’s demands that it change its curriculum and eliminate DEI-related content. In retaliation, Trump has frozen federal funding to Harvard and threatened to revoke the university’s tax-exempt status.
“At a time when we’re already seeing school boards try to ban books and we’re seeing certain politicians be very explicit about the types of content they don’t want people to ever see, whether it’s critical race theory or abortion information or information about climate change … it is deeply uncomfortable for us, with our past work on content moderation, to see members of both parties openly advocating for content moderation at this scale,” McKinney said.