The United States has no comprehensive federal privacy law. What it has instead is a patchwork: sector-specific federal statutes, a growing body of state privacy laws, and a Federal Trade Commission that uses its authority over unfair and deceptive practices to fill some of the gaps. For most people, the privacy protections that actually apply to their data depend heavily on where they live and what kind of data is involved. Understanding that patchwork is the starting point for understanding what the law really protects.
What Counts as Personal Data
Personal data, in the broadest sense, means any information that can identify you or be linked back to you. The definition matters because most privacy laws use it to determine what is covered. Narrow definitions create gaps that companies exploit; broad definitions create compliance burdens that affect even low-risk data practices.
The obvious categories are name, email address, phone number, and government identification numbers. But modern data collection goes considerably further. Location data tracked through a smartphone can reveal where you live, where you work, what medical facilities you visit, and what religious institutions you attend. Browsing history reveals interests, health concerns, political views, and purchasing intent. Device identifiers allow companies to recognize you across websites even when you have not logged in. Inferred data — conclusions an algorithm draws about you based on behavioral signals — may say more about you than the raw data it was derived from.
This last category is particularly important in 2026. AI systems are capable of inferring sensitive characteristics, including health conditions, sexual orientation, political affiliation, and financial stress, from data that appears innocuous on its face. A privacy law that protects explicitly collected sensitive data but not inferred sensitive data leaves a significant gap that the technology has already moved to exploit.
Federal Privacy Law: Sector-Specific and Limited
Federal privacy law in the United States is organized by data type and industry rather than by a general right to privacy. Several federal statutes provide meaningful protection in their specific domains.
HIPAA protects health information held by covered entities: doctors, hospitals, health insurers, and their business associates. It does not cover health data held by fitness apps, direct-to-consumer genetic testing services, or wellness platforms — a significant gap as health data increasingly moves outside the traditional healthcare system.
The Gramm-Leach-Bliley Act requires financial institutions to disclose their data sharing practices and give consumers limited opt-out rights for sharing with third parties. It does not give consumers the right to access or delete their financial data.
COPPA, the Children's Online Privacy Protection Act, restricts collection of personal data from children under 13 without verifiable parental consent. It applies to websites and online services directed at children or with actual knowledge that they are collecting data from children. Enforcement has increased in recent years, particularly for platforms with significant underage user bases.
The FTC Act's prohibition on unfair or deceptive practices applies broadly and has been used to pursue companies that misrepresent their data practices in privacy policies or fail to implement reasonable security. FTC enforcement actions are one of the more active areas of federal privacy activity, though the agency's remedial authority has limits.
State Privacy Laws: A Growing and Uneven Landscape
State privacy legislation has moved significantly faster than federal law over the past several years, producing a patchwork of protections that vary considerably in scope, strength, and the rights they grant.
California has the most comprehensive framework, built on two statutes. The California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), gives California residents the right to know what personal information businesses collect about them, the right to delete it, the right to correct it, the right to opt out of its sale or sharing, and the right to limit the use of sensitive personal information. The California Privacy Protection Agency, created by CPRA, has independent enforcement authority. Businesses that meet certain size or data volume thresholds and serve California consumers must comply regardless of where the business is located.
Texas enacted the Texas Data Privacy and Security Act, effective July 2024, which grants Texas residents the rights to access, delete, and correct their personal data and to opt out of its sale and of its processing for targeted advertising. Unlike most state laws, it has no numeric processing threshold: it applies broadly to businesses that conduct business in Texas and process or sell personal data, exempting only small businesses as defined by the Small Business Administration, which must still obtain consent before selling sensitive data.
Illinois has the Biometric Information Privacy Act (BIPA), one of the most powerful privacy statutes in the country in its specific domain. BIPA requires informed written consent before collecting biometric identifiers such as fingerprints, facial geometry, and retinal scans, and provides a private right of action with statutory damages of $1,000 to $5,000 per violation. It has generated substantial litigation against employers using biometric timekeeping systems and companies using facial recognition technology.
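The statutory damages figures make BIPA exposure easy to estimate, which is why class litigation has been so active. The sketch below is a hypothetical illustration: the per-violation amounts come from the statute, but the employee count is invented, and whether each biometric scan counts as a separate violation is itself a litigated question.

```python
# Hypothetical BIPA statutory-damages estimate. Per-violation amounts
# are from the statute ($1,000 negligent, $5,000 intentional/reckless);
# the claimant and violation counts below are invented for illustration.

NEGLIGENT_PER_VIOLATION = 1_000
INTENTIONAL_PER_VIOLATION = 5_000

def bipa_exposure(num_people: int, violations_per_person: int,
                  intentional: bool = False) -> int:
    """Rough ceiling on statutory damages for a class of claimants."""
    per = INTENTIONAL_PER_VIOLATION if intentional else NEGLIGENT_PER_VIOLATION
    return num_people * violations_per_person * per

# An employer whose 500 workers each enrolled once in a fingerprint
# timekeeping system, treated as one violation per person:
print(bipa_exposure(500, 1))         # 500000
print(bipa_exposure(500, 1, True))   # 2500000
```

Even under the conservative one-violation-per-person assumption, exposure reaches seven figures quickly, which explains the size of reported settlements.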
New York has the SHIELD Act, which strengthened data breach notification requirements, and has proposed but not yet enacted broader comprehensive privacy legislation. Florida, Virginia, Colorado, Connecticut, and more than a dozen other states have enacted varying forms of comprehensive privacy law. The rights granted and the businesses covered differ enough across states that compliance for national businesses requires careful state-by-state analysis.
How AI Creates New Privacy Risks
AI systems interact with personal data in ways that existing privacy frameworks were not designed to address. Several specific dynamics create risks that go beyond traditional data collection and sharing concerns.
Scale and combination are the first. AI systems can combine data from multiple sources and surface patterns across datasets that no manual analysis could find. Data that seems harmless in isolation becomes sensitive in combination: location pings from a smartphone, purchase history from a retailer, and browsing behavior from a data broker can be merged into a profile that reveals sensitive facts the person never disclosed to anyone.
Training data is the second. When personal data is used to train AI models, it may be embedded in the model in ways that are not easily separable or deletable. A person who exercises their right to delete their data from a company's database may have no practical way to remove the influence of that data from a model already trained on it. Whether training constitutes "processing" subject to deletion rights is a live legal question in multiple jurisdictions.
Automated decision-making is the third. AI systems increasingly make or influence consequential decisions: credit approvals, insurance pricing, job application screening, content moderation, and law enforcement targeting. These decisions may rely on inferred characteristics the subject never disclosed. Several state privacy laws include rights related to automated decision-making, but the scope of those rights and the transparency they require varies considerably.
What Rights You Actually Have Right Now
The rights available to you depend on your state of residence and the type of data involved. If you live in California, Texas, Colorado, Virginia, or another state with a comprehensive privacy law, you likely have some combination of the following rights against covered businesses:
- The right to know what personal data a business has collected about you and how it is being used
- The right to access a copy of your personal data
- The right to request deletion of your personal data, subject to exceptions for legal obligations and other legitimate purposes
- The right to correct inaccurate personal data
- The right to opt out of the sale of your personal data to third parties
- The right to opt out of targeted advertising based on your personal data
- The right to limit the use of sensitive personal information beyond what is necessary for the service
Exercising these rights requires submitting a verifiable request to the business. Most covered businesses are required to respond within 45 days and to provide a mechanism for submitting requests, typically through a "Do Not Sell or Share My Personal Information" link or a dedicated privacy request form.
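If you want to track a pending request, the deadline arithmetic is simple. This is an illustrative sketch assuming the common 45-day response window; many statutes also permit one 45-day extension with notice to the consumer, and the exact rules vary by state, so check the law that applies to you.

```python
from datetime import date, timedelta

# Illustrative deadline tracker for a privacy rights request.
# Assumes the common 45-day window plus an optional 45-day extension;
# actual windows and extension rules vary by statute.

def response_deadline(submitted: date, extended: bool = False) -> date:
    """Date by which a covered business should respond."""
    days = 45 + (45 if extended else 0)
    return submitted + timedelta(days=days)

print(response_deadline(date(2026, 1, 10)))                 # 2026-02-24
print(response_deadline(date(2026, 1, 10), extended=True))  # 2026-04-10
```

Keeping a dated record of when you submitted the request matters more than the calculation itself: it is the evidence you would attach to a complaint if the business misses the deadline.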
If you live in a state without a comprehensive privacy law, your rights under state law are more limited. Federal rights under sector-specific statutes apply based on the type of data, not your location. HIPAA rights apply to health data held by covered entities regardless of state. BIPA rights apply if you are in Illinois. FTC enforcement protects against deceptive practices broadly.
A Common Scenario
A California resident uses a fitness app that tracks her daily location, heart rate, sleep patterns, and menstrual cycle. She later learns the app shared her data with a third-party analytics company. Under CCPA and CPRA, she has the right to know what data the app collected, request a copy of it, ask that it be deleted, and opt out of its sharing with third parties. She submits a deletion request through the app's privacy portal. The app must respond within 45 days and comply unless an exception applies. The analytics company that received her data before the opt-out may have its own obligations depending on its relationship with the app. Whether the data already incorporated into any AI model the analytics company operates can be effectively deleted is a question her state's law does not yet fully answer.
What Privacy Laws Do Not Cover
Privacy laws have real limits, and understanding them helps set realistic expectations. Public information is generally not protected — data about you that is publicly available or that you voluntarily made public is typically outside the scope of privacy statutes. First Amendment considerations also limit how aggressively laws can restrict the collection and sharing of certain categories of information.
Small businesses often fall below the thresholds that trigger compliance obligations under state privacy laws. A company that processes data of fewer than 100,000 consumers annually generally falls outside Colorado or Virginia law even if its data practices are aggressive, and Texas exempts small businesses as defined by the Small Business Administration. California's thresholds are structured differently and may capture more businesses.
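The threshold logic can be sketched as a simple applicability check. The numbers below are simplified illustrations of the common structure (a consumer-count test, with a lower count for businesses that sell data); real statutes add revenue tests and exemptions, so this is a sketch, not a compliance tool.

```python
# Simplified state-law applicability sketch. Thresholds here illustrate
# the common pattern (100,000 consumers, or 25,000 if the business
# derives revenue from selling data); real statutes differ in detail.

THRESHOLDS = {
    "CO": (100_000, 25_000),  # (general test, data-seller test)
    "VA": (100_000, 25_000),
}

def likely_covered(state: str, consumers_processed: int,
                   sells_data: bool = False) -> bool:
    """Rough first-pass check against a state's consumer-count test."""
    tests = THRESHOLDS.get(state)
    if tests is None:
        return False  # state not modeled in this sketch
    general, seller = tests
    threshold = seller if sells_data else general
    return consumers_processed >= threshold

print(likely_covered("CO", 80_000))         # False
print(likely_covered("CO", 80_000, True))   # True
```

The practical point for consumers is the converse: a business that tells you it is not covered may be correct, and the answer can differ state by state for the same business.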
Data collected outside the jurisdiction's reach is not subject to that jurisdiction's rules. A company with no California nexus and no California consumers is not subject to CCPA regardless of its data practices. For Americans whose data is collected by foreign companies, enforcement options are limited.
Inferred data remains an area of significant legal uncertainty. If a company does not collect your health status but infers it from your purchasing behavior, whether that inference is subject to the same protections as directly collected health data depends on how the applicable law defines sensitive information and whether the inference falls within that definition.
Practical Steps to Protect Your Privacy
Legal rights are only useful if you exercise them. For most people, a few practical steps reduce privacy risk more effectively than trying to navigate the full complexity of applicable law.
Review app permissions periodically. Location, microphone, and contact list access granted to an app you no longer use persists until you revoke it. Most smartphones make it straightforward to audit which apps have which permissions, and limiting each app to the permissions it actually needs to function is one of the highest-impact steps available.
Opt out of data sales and sharing where the option is available. Most major platforms and data brokers offer opt-out mechanisms, though finding them requires effort. California residents have the most enforceable opt-out rights, but opt-out options exist for residents of other states too as businesses build national compliance programs.
Read data breach notifications carefully. If a company notifies you of a breach involving your data, the notice will typically identify what type of data was exposed and what steps you can take. Acting on breach notifications promptly — placing a credit freeze, changing passwords, monitoring for unusual activity — reduces the risk of downstream harm from exposed data.
Frequently Asked Questions
Does the United States have a federal privacy law that covers all personal data?
No. The United States has no comprehensive federal privacy law covering personal data broadly. What exists is a collection of sector-specific statutes covering health data, financial data, children's data, and specific industries, plus FTC enforcement authority over deceptive practices. Comprehensive privacy rights for most personal data depend on state law, which varies significantly by state. Federal comprehensive privacy legislation has been proposed repeatedly but has not passed as of early 2026.
Do I have the right to delete my personal data from a company's records?
It depends on your state of residence and whether the company is covered by your state's privacy law. California, Texas, Colorado, Virginia, and several other states grant deletion rights to residents against covered businesses. Exceptions apply: businesses may retain data they are legally required to keep, data needed to complete a transaction, and data used for certain security purposes. If you live in a state without a comprehensive privacy law, deletion rights under state law are more limited, though sector-specific federal rights may apply depending on the type of data.
Can companies use my data to train AI models without my permission?
Under current law in most U.S. jurisdictions, companies can generally use data they have lawfully collected to train AI models if their privacy policy discloses that practice broadly enough to cover it. California's CPRA and some other state laws impose limits on using sensitive personal information for purposes beyond the primary transaction, which may restrict some AI training uses. Whether a right to delete includes the right to remove one's data from an already-trained model is an unresolved legal question that regulators and courts are beginning to address.
What is BIPA and why does it matter?
BIPA is the Illinois Biometric Information Privacy Act, enacted in 2008. It requires companies to obtain informed written consent before collecting biometric identifiers such as fingerprints, facial geometry, and voiceprints, and prohibits selling or profiting from biometric data. It provides a private right of action with statutory damages of $1,000 per negligent violation and $5,000 per intentional violation. It has generated significant litigation, with settlements running into the hundreds of millions of dollars against employers and technology companies. It remains one of the most powerful privacy statutes in the country in its specific domain and is a model that other states have considered but not widely replicated.
What should I do if a company refuses to honor my data deletion request?
If you live in a state with a privacy law that grants deletion rights and a covered business refuses a valid request without a lawful basis, you can file a complaint with your state's attorney general or privacy enforcement agency. California residents can complain to the California Privacy Protection Agency. Other states with privacy laws route complaints through the attorney general's office. Document your request and the company's response before filing. The company may owe you an explanation for a denial even if the denial is ultimately lawful.