Finding a deepfake of yourself online — particularly nonconsensual intimate imagery — is one of the more distressing things that can happen in a digital context. The content feels violating, the spread feels unstoppable, and the path to removal is not obvious. The good news is that legal tools and platform mechanisms have expanded significantly in recent years, and acting quickly with the right steps gives you the best chance of containing the damage.
What Makes a Deepfake Legally Actionable
A deepfake is synthetic media — an image, video, or audio recording — generated or manipulated by AI to make it appear that a real person said or did something they did not. The technology itself is not illegal. What determines legality is how the content is used and what harm it causes.
Deepfakes become legally actionable under several theories depending on the content and context. Nonconsensual intimate imagery — sometimes called deepfake pornography — is the most clearly addressed category and is criminalized or subject to civil liability in more than 20 states. Defamation applies when the deepfake presents false statements of fact that damage reputation, such as a fabricated video depicting someone committing a crime or making statements they never made. Harassment and cyberstalking statutes apply when the content is used to threaten, intimidate, or control. Impersonation laws apply when synthetic media is used to fraudulently represent someone's identity for financial or other gain. Right of publicity laws, which protect individuals' control over commercial use of their name, image, and likeness, apply when deepfakes are used commercially without consent.
The absence of consent is central to most deepfake claims. Content created as obvious satire or parody involving a public figure on a matter of public concern occupies different legal territory than nonconsensual intimate imagery of a private individual. The former has significant First Amendment protection. The latter does not.
Step One: Document Before You Do Anything Else
Before reporting, requesting removal, or contacting anyone, document the content thoroughly. This step feels counterintuitive when the instinct is to get the content down immediately, but documentation is what makes every subsequent step more effective. Platforms sometimes remove content quickly, and once it is gone, you may have nothing left to prove it existed.
Take screenshots of the content itself, including any visible metadata, usernames, post dates, and URLs. Use your phone or a screen recording tool to capture video if the deepfake is a video. Record the full URL of every page where the content appears. Note the date and time you first discovered it and every platform it appears on. If the content was sent to you directly — via message, email, or airdrop — preserve those communications as well. Save everything to a location that is not dependent on the platform: a personal hard drive, cloud storage, or email to yourself.
This documentation serves multiple purposes. It is the evidentiary foundation for platform reports, police reports, and civil litigation. It establishes a timeline. It also creates a record that survives platform removal, which matters if you need to prove the content existed and caused harm.
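If you are comfortable with light scripting, you can also make this record tamper-evident. The following is a minimal sketch in Python, with illustrative filenames, URL, and log fields rather than any prescribed format: it appends each saved screenshot or recording to a log with a SHA-256 hash and a UTC timestamp, so you can later show the files have not been altered since capture.

```python
# evidence_log.py - append-only log of captured deepfake evidence.
# A minimal sketch: records each saved screenshot or recording with a
# SHA-256 hash and a UTC timestamp so you can later show the files have
# not changed since capture. Paths, URLs, and fields are illustrative.

import csv
import hashlib
import sys
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("evidence_log.csv")  # keep this off-platform: local drive or cloud backup

def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large screen recordings are handled too."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_item(path: Path, source_url: str, note: str = "") -> None:
    """Append one evidence item: file, hash, where it came from, when logged."""
    is_new = not LOG_FILE.exists()
    with LOG_FILE.open("a", newline="") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(["logged_utc", "file", "sha256", "source_url", "note"])
        writer.writerow([
            datetime.now(timezone.utc).isoformat(),
            str(path),
            sha256_of(path),
            source_url,
            note,
        ])

if __name__ == "__main__":
    # Usage: python evidence_log.py screenshot.png "https://example.com/post/123"
    file_path, url = Path(sys.argv[1]), sys.argv[2]
    log_item(file_path, url, note="initial discovery")
    print(f"Logged {file_path} from {url}")
```

The hash is the useful part: recomputing it later and matching it against the log corroborates that the evidence is unchanged. Keep the log file backed up alongside the files themselves, off the platform.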
Step Two: Report to the Platform
Every major platform — Meta, Google, X, TikTok, Reddit, Snapchat, and others — has reporting mechanisms for nonconsensual intimate imagery and impersonation. Some have dedicated deepfake reporting pathways. Reporting through the platform's official process is the fastest route to removal for most people and should happen in parallel with any legal steps, not after them.
When submitting a report, be specific. Identify the content as AI-generated and nonconsensual. Reference the platform's specific policy on synthetic intimate imagery or impersonation — most platforms prohibit it explicitly, and citing the relevant policy rather than making a general complaint routes your report to the right team. Attach your documentation. If the platform has a dedicated reporting form for nonconsensual intimate images, use it rather than a general abuse report.
Federal law now requires certain platforms to respond to reports of nonconsensual intimate imagery more quickly than standard content reports. The TAKE IT DOWN Act, signed into law in 2025, criminalizes publishing nonconsensual intimate images, including AI-generated ones, and requires covered platforms to remove reported content within 48 hours of a valid request. The DEFIANCE Act, which passed the Senate in 2024, would add a federal civil cause of action for victims of nonconsensual intimate deepfakes. Together, these measures signaled to platforms that this category of content carries heightened legal and regulatory attention, and platforms have accelerated their internal processes for it.
If a platform does not act within a reasonable time (typically a few days for intimate content), escalate to a formal legal demand. A cease and desist letter from an attorney often produces faster results than a standard report. A DMCA takedown notice can also work, but note that the DMCA protects copyright, not likeness: it applies only if the deepfake incorporates a photograph or video you hold the copyright to, such as a selfie you took yourself.
The Legal Landscape by State
More than 20 states have enacted laws specifically addressing nonconsensual deepfake intimate imagery, and the coverage varies. Most of these statutes criminalize the creation and distribution of nonconsensual AI-generated intimate images and provide civil remedies allowing victims to sue for damages.
California's law is among the broadest, covering both intimate imagery and deepfakes used for harassment or defamation of private individuals. It provides civil remedies including injunctive relief, actual or statutory damages, and attorneys' fees, with statutory damages of up to $150,000 available where the defendant acted with malice. Texas's law criminalizes nonconsensual intimate deepfakes and provides civil remedies. Florida, Illinois, and New York all have statutes addressing nonconsensual intimate imagery that courts have applied to AI-generated content, with varying scope and remedies.
For deepfakes that are defamatory rather than intimate, general defamation law applies in every state. The standard elements — a false statement of fact, publication to a third party, identification of the plaintiff, and resulting damage to reputation — apply to deepfake video just as they apply to written or verbal statements. Proving the statement is false is straightforward with deepfakes in a way it is not always with text, since the underlying reality of what actually occurred is usually not in dispute.
Several states have also enacted election-related deepfake laws that prohibit AI-generated content designed to mislead voters about candidates. These apply narrowly to electoral contexts and are unlikely to be relevant in most personal deepfake situations.
When to Involve Law Enforcement
Filing a police report is appropriate when the deepfake involves criminal conduct: threats, stalking, extortion, or nonconsensual intimate imagery in a state that criminalizes it. A police report creates an official record, may trigger an investigation, and is sometimes required before certain legal remedies — like a civil protection order — become available.
The practical reality is that law enforcement response to deepfake complaints varies considerably by jurisdiction. Large city departments and those with dedicated cybercrime units are more likely to investigate effectively. Smaller departments may not have the technical capability or resources to pursue the case actively. Filing the report is still worthwhile even if active investigation is unlikely, because the record itself has value.
The FBI's Internet Crime Complaint Center (IC3) accepts reports of online fraud and extortion, including sextortion schemes that use deepfakes as leverage. If the deepfake is being used to extort money or other concessions, IC3 is the appropriate federal reporting pathway.
Civil Legal Options
Civil litigation, meaning suing the person who created or distributed the deepfake, is available but comes with practical considerations. To pursue a civil claim, you generally need to know who the defendant is. When the account is anonymous, that typically means filing a John Doe lawsuit and obtaining a court order authorizing a subpoena for the platform's records; you must start the suit before you can learn exactly whom you are suing.
Once the defendant is identified, viable claims, depending on jurisdiction and facts, include nonconsensual intimate imagery under state statute, defamation, intentional infliction of emotional distress, and right of publicity violations. Damages can include actual economic losses, emotional distress damages, and, in states whose statutes provide statutory damages, amounts set by law without requiring proof of specific harm. California's nonconsensual intimate imagery statute, for example, provides statutory damages regardless of whether the plaintiff can prove a specific dollar amount of harm.
An attorney specializing in internet law, privacy, or defamation can evaluate whether the facts support viable claims and whether the likely defendant has assets that would make a judgment collectible. Many of these cases settle once the defendant understands the legal exposure, particularly after a well-crafted cease and desist letter.
A Common Scenario
A woman in California discovers that an ex-partner has posted AI-generated intimate imagery of her face superimposed on another person's body on a public adult content site. She screenshots everything immediately, capturing the URL, username, and post date. She reports to the platform using its dedicated nonconsensual intimate imagery reporting form, citing the platform's policy explicitly. The platform removes the content within 48 hours. She files a police report documenting the incident. She consults an attorney who sends a cease and desist letter to the ex-partner's last known address, threatening civil action under California's nonconsensual intimate imagery statute for statutory damages and attorneys' fees. The ex-partner does not respond, but the content does not reappear. Her attorney advises her to monitor for reposts and preserve all documentation for potential civil action if the behavior continues.
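Monitoring for reposts can be partly automated. Below is a minimal sketch using only Python's standard library, assuming you keep a list of the URLs you previously reported (the URL shown is a placeholder): it re-fetches each one and flags any that serve content again. Treat it as a rough signal only; some sites return a normal page with a removal notice, and it cannot catch reposts at new URLs, which reverse image search is better suited to find.

```python
# recheck_urls.py - re-check previously reported URLs for reappearing content.
# A minimal sketch using only the standard library; the URL list is a
# placeholder, and a real check may need to handle logins, redirects,
# or pages that return 200 with a "removed" notice.

import urllib.error
import urllib.request

REPORTED_URLS = [
    "https://example.com/post/123",  # placeholder: URLs from your evidence log
]

def status_of(url: str) -> int:
    """Fetch the URL and return its HTTP status, or -1 if unreachable."""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=15) as resp:
            return resp.status
    except urllib.error.HTTPError as e:
        return e.code   # 404/410 usually means the content stayed down
    except urllib.error.URLError:
        return -1       # network error or unreachable host

if __name__ == "__main__":
    for url in REPORTED_URLS:
        code = status_of(url)
        flag = "CONTENT MAY BE LIVE" if code == 200 else "down or unreachable"
        print(f"{code:>4}  {flag}  {url}")
```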
Frequently Asked Questions
Is it illegal to create a deepfake of someone without their consent?
It depends on what the deepfake depicts and how it is used. Creating or distributing AI-generated intimate imagery of a real person without their consent is illegal in more than 20 states, and publishing it is now a federal crime as well. Using a deepfake to defame, harass, stalk, or extort someone is illegal under existing criminal statutes in every state. Creating a deepfake for obvious satire or parody on a matter of public concern occupies more protected territory, though it can still cross into defamation if it presents false statements of fact convincingly. The technology itself is not regulated; the use and harm are what trigger legal liability.
Can I get a deepfake removed from Google search results?
Google has a removal request process specifically for nonconsensual fake intimate imagery in search results. You can submit a request through Google's Search removals tool identifying the URLs you want removed. Google has expanded this process in recent years and treats nonconsensual intimate imagery, including AI-generated content, as a priority removal category. Removal from search results does not remove the underlying content from the site hosting it — that requires a separate report to the platform — but it does significantly reduce discoverability. For other types of defamatory deepfake content, a legal demand citing defamation and harm to reputation may be required.
What if the deepfake was posted anonymously and I don't know who created it?
Identifying an anonymous creator requires a subpoena served on the platform to obtain account registration information, IP address logs, and any other identifying data the platform holds. This typically requires filing a civil lawsuit first to obtain the court's authority to issue the subpoena. The process takes time and does not always produce useful information if the creator used a VPN or created accounts with false information. Despite these limitations, identification is sometimes possible, particularly when the creator made mistakes — using a recognizable username, posting from a traceable device, or engaging with the content in ways that reveal identifying details.
Does reporting a deepfake to a platform protect me legally?
Reporting to a platform is not a substitute for legal protections — it is a separate, practical step. Platform reports can get content removed quickly without litigation. Legal claims against the creator remain available regardless of whether you reported to the platform. The two tracks operate independently. However, your documentation of the report and the platform's response — including any refusal to act — can be relevant evidence in subsequent legal proceedings, particularly if you are arguing that the platform had notice of the harmful content.
What is the DEFIANCE Act and does it help?
The DEFIANCE Act, which passed the Senate in 2024, would create a federal civil cause of action for victims of nonconsensual intimate visual depictions, including AI-generated deepfakes. It would let victims sue in federal court for damages, including liquidated damages of up to $150,000 per violation (with higher amounts where the conduct is connected to offenses such as stalking), without needing to prove economic harm, and it would reach those who distribute the content, not just the original creator. It covers intimate visual depictions shared without consent where the depicted person is identifiable. Because the federal landscape is moving quickly (the TAKE IT DOWN Act, enacted in 2025, separately criminalizes publishing this content and imposes removal duties on platforms), confirm the statute's current status with counsel. Federal measures like these complement state laws rather than replacing them.