Legal Clarity Blog

Expert insights on legal document analysis and understanding complex legal terms

If Someone Makes a Deepfake of You: What You Can Do (Step-by-Step)

Dec 13, 2025 · 5 min read

A step-by-step, plain-language guide for people targeted by AI-generated deepfakes, impersonation videos, or voice clones: how to collect evidence, report abuse, and understand your rights under evolving U.S. laws. Informational only, not legal advice.

Discovering that someone has created a fake image, video, or voice of you can be deeply distressing. These AI-generated “deepfakes” often look and sound convincing — and they can be used for harassment, blackmail, or spreading false information. The experience can feel invasive and humiliating, but you are not powerless. Laws, reporting tools, and new policies are improving rapidly to help victims regain control.

This guide walks you through practical, step-by-step actions you can take to protect yourself, preserve your rights, and respond effectively if someone makes or shares a deepfake of you.

Step 1: Preserve Evidence Immediately

Before contacting anyone or trying to get the content removed, your first move should be to gather and preserve evidence. Online content can disappear quickly — and if you decide to report it or take legal action, documentation is critical.

Here’s what to do:

  • Download the file: Save a copy of the image, video, or audio clip if the platform allows it; an optional sketch for logging the saved file's details appears at the end of this step.
  • Take screenshots: Capture the web address (URL), username, profile, and visible timestamps.
  • Record context: Note the date, time, and where you found it (social platform, private message, website, etc.).
  • Preserve messages: If someone sent or shared the content directly with you, save those messages or emails too.

Tip: If you’re uncomfortable viewing or saving the material yourself, ask a trusted friend or digital safety advocate to help document it.
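
For readers comfortable with a little scripting, the sketch below is one optional way to document a saved file: it records the file's name, size, a SHA-256 fingerprint, and the time you logged it, which can later help show that your copy was not altered. It uses only Python's standard library; the file names evidence.jpg and evidence_log.txt are placeholders, not anything specific to your case.

    import hashlib
    from datetime import datetime, timezone
    from pathlib import Path

    def log_evidence(file_path, log_path="evidence_log.txt"):
        """Append the file's name, size, SHA-256 hash, and a UTC timestamp to a log."""
        path = Path(file_path)
        digest = hashlib.sha256(path.read_bytes()).hexdigest()   # fingerprint of the exact bytes you saved
        recorded_at = datetime.now(timezone.utc).isoformat()     # when you documented it
        entry = f"{recorded_at}\t{path.name}\t{path.stat().st_size} bytes\tsha256={digest}\n"
        with open(log_path, "a", encoding="utf-8") as log:       # keep one running log file
            log.write(entry)
        print(entry.strip())

    # Example: log_evidence("evidence.jpg")  # replace with the file you actually saved

Keep the original file unchanged alongside this log; it supplements screenshots and platform reports, it does not replace them.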

Step 2: Report the Deepfake to the Platform

Most major platforms — including Instagram, TikTok, X (Twitter), YouTube, and Reddit — have policies that prohibit impersonation, non-consensual sexual content, and synthetic media used to deceive others. When reporting, select options related to impersonation, harassment, or privacy violation.

When submitting a report, provide:

  • Links to the offending content
  • Screenshots and timestamps
  • A brief explanation that the content is an AI-generated deepfake created without your consent

Some platforms now require users to label AI-generated content. If the creator failed to do so, that’s an additional violation that strengthens your case for removal.

Step 3: Use Takedown and Privacy Laws

Depending on your location and the nature of the deepfake, you may have legal rights under several types of laws. These vary by state, but the most common include:

  • Privacy or “Right of Publicity” Laws: Protect your name, image, and likeness from being used without consent.
  • Non-Consensual Intimate Image (NCII) Laws: Most states criminalize or provide civil remedies for fake or real intimate images shared without consent — including AI-generated versions.
  • Harassment and Cyberstalking Laws: If the deepfake is part of a pattern of abuse or threats, those laws may apply too.
  • Copyright Law (Limited Use): If the deepfake was built from a photo, video, or recording you created yourself (for example, a selfie you took), you may own the copyright in that original work and can request takedowns under the DMCA (Digital Millennium Copyright Act). Merely appearing in someone else's work generally does not give you a copyright claim.

Recent state laws in California, Texas, and New York specifically target deepfakes made to harm reputations or interfere with elections. These laws generally give victims civil or criminal remedies against the people who create or distribute such content.

You can also file a removal request directly with search engines: Google, for example, accepts requests to remove non-consensual explicit imagery (including AI-generated fakes) and certain personal information from its search results.

Step 4: Consider Legal Help if Harm Is Serious

If the deepfake causes significant harm — such as job loss, reputational damage, or threats to your safety — consulting a lawyer can make a major difference. You don’t need to file a lawsuit to benefit from legal advice; a short consultation can help you understand your rights and next steps.

Options may include:

  • Sending a cease and desist letter to the creator or platform.
  • Filing a civil claim for invasion of privacy, defamation, or emotional distress.
  • Working with law enforcement if the content involves threats, blackmail, or sexual exploitation.

Free or low-cost legal help may be available through digital rights nonprofits, local bar associations, or legal aid programs. Organizations like the Cyber Civil Rights Initiative and the VictimConnect Resource Center can guide you confidentially.

Step 5: Protect Yourself Going Forward

Once the immediate crisis has passed, it’s important to reduce your risk of future incidents and maintain your peace of mind.

  • Lock down privacy settings: Make personal social media accounts private and review who can tag or share your content.
  • Search for your image periodically: Tools like Google's reverse image search or PimEyes can help you find unauthorized uses of your likeness.
  • Document harassment patterns: If this wasn’t the first incident, track dates and usernames — patterns matter legally.
  • Use watermarks or content controls: If you share your own photos or videos, a visible or subtle watermark, or provenance metadata such as Content Credentials, can deter misuse and make altered copies easier to spot (a small example follows this list).
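
For readers who post their own photos and want a concrete starting point, here is a minimal sketch of a visible watermark using the third-party Pillow imaging library (installed with "pip install pillow"); the file names and watermark text are placeholders, and this is only one simple deterrent, not a guarantee against misuse.

    from PIL import Image, ImageDraw

    def add_watermark(src, dst, text="posted by @myhandle"):
        """Stamp a semi-transparent text watermark near the image's lower-left corner."""
        base = Image.open(src).convert("RGBA")
        overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))   # fully transparent layer
        draw = ImageDraw.Draw(overlay)
        draw.text((10, base.height - 30), text, fill=(255, 255, 255, 128))  # roughly 50% opacity white text
        watermarked = Image.alpha_composite(base, overlay)
        watermarked.convert("RGB").save(dst)                   # back to RGB so it can be saved as JPEG

    # Example: add_watermark("my_photo.jpg", "my_photo_watermarked.jpg")

A watermark will not stop a determined abuser, but it makes reposted or altered copies easier to recognize and report.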

It’s also worth educating friends and family about deepfakes — especially how to identify and report them. Public awareness is one of the most powerful tools against digital impersonation.

Step 6: Take Care of Your Mental Health

Being targeted by a deepfake can be emotionally devastating. You might feel violated, angry, or powerless. These reactions are completely normal. Consider reaching out to trusted people or professional resources for emotional support.

Confidential hotlines and advocacy organizations, such as the Cyber Civil Rights Initiative and the VictimConnect Resource Center mentioned above, can also provide support and referrals.

Remember: You did nothing wrong. The blame lies entirely with the person who created or distributed the fake. The law is catching up, and resources for victims are expanding every year.

Step 7: Stay Informed About Evolving Laws

Deepfake laws are developing quickly. Over a dozen states now explicitly outlaw malicious AI impersonation, and the federal TAKE IT DOWN Act, enacted in 2025, criminalizes publishing non-consensual intimate images (including realistic AI-generated ones) and requires covered platforms to remove them promptly after a valid request. The White House's Blueprint for an AI Bill of Rights and FTC enforcement actions also signal growing protection for victims.

Staying informed — and sharing that knowledge — can help protect others. Awareness is not just power; it’s prevention.

Need Help to Understand Your Legal Documents?

Don't let complex legal language confuse you. Upload your documents and get clear, easy-to-understand summaries in minutes.

Get Started
