AI systems are already making or influencing decisions that affect your job prospects, your creditworthiness, your insurance rates, and your online identity. Most of these decisions happen without your knowledge, often without meaningful human review, and frequently without clear legal accountability. The law is catching up — but unevenly, and with significant gaps. Here is where things actually stand.
AI in Hiring: What Employers Can and Cannot Do
Automated hiring tools are now common. Employers use AI to screen resumes, rank candidates by predicted job fit, analyze video interviews for voice patterns and facial expressions, and monitor employee performance. These tools can process far more applications than a human recruiter and do so quickly. They also introduce specific legal risks that regulators are beginning to address.
Federal anti-discrimination law — Title VII, the Age Discrimination in Employment Act, the Americans with Disabilities Act — applies to AI-based hiring decisions just as it applies to human ones. If an AI screening tool produces disparate impact against a protected class, the employer faces the same liability exposure as if a human recruiter had done the same thing. The fact that an algorithm made the decision does not insulate the employer from discrimination claims. The EEOC has issued guidance making this explicit and has pursued enforcement actions involving AI hiring tools.
New York City enacted Local Law 144, which requires employers using automated employment decision tools to conduct annual bias audits and disclose their use of such tools to job candidates. Illinois passed the Artificial Intelligence Video Interview Act, requiring employers to notify applicants when AI analyzes video interviews and to explain how the AI works. Several other states have similar legislation pending. California is developing broader AI employment regulations through its Civil Rights Department.
If you applied for a job and were rejected, you generally have no right to know whether AI was involved in the decision or what factors it weighted. Where disclosure laws like New York's apply, you at least know the tool was used. Whether you can challenge the outcome depends on whether you can show disparate impact or another cognizable discrimination theory — which typically requires data that individual applicants do not have easy access to.
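To make disparate impact concrete, here is a minimal sketch of the arithmetic involved, using invented numbers. The EEOC's traditional screen is the four-fifths rule: a selection tool is flagged when one group's selection rate falls below 80 percent of the highest group's rate. New York City's bias audits are built around a similar impact-ratio calculation. Real analyses use much larger datasets and statistical significance testing; this is only an illustration.

```python
# Hypothetical illustration of the EEOC's "four-fifths rule," the
# traditional screen for disparate impact. All numbers are invented.

# Invented outcomes from an AI resume screener, by age group.
groups = {
    "under 40": {"applicants": 1000, "advanced": 300},
    "40 and over": {"applicants": 800, "advanced": 120},
}

# Selection rate: share of each group the tool advanced.
rates = {name: g["advanced"] / g["applicants"] for name, g in groups.items()}
highest = max(rates.values())

for name, rate in rates.items():
    impact_ratio = rate / highest  # the ratio Local Law 144 audits report
    flag = "below the 4/5 threshold" if impact_ratio < 0.8 else "ok"
    print(f"{name}: selection rate {rate:.1%}, "
          f"impact ratio {impact_ratio:.2f} ({flag})")
```

In this invented example, applicants 40 and over advance at half the rate of younger applicants, an impact ratio of 0.50. A ratio like that is evidence of disparate impact, not proof of illegal discrimination; the employer can still try to show the selection criteria are job-related and consistent with business necessity.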
Deepfakes, Synthetic Media, and Your Identity
AI can now generate realistic images, video, and audio of real people doing and saying things they never did or said. The technology is accessible, fast, and increasingly difficult to detect. The legal framework for addressing it is a patchwork of state statutes, existing tort law, and platform policies.
Nonconsensual intimate imagery generated by AI, commonly called deepfake pornography, is the most legally developed area. (It is distinct from AI-generated child sexual abuse material, which is prosecuted under separate child-exploitation laws.) More than 20 states have enacted laws specifically criminalizing nonconsensual deepfake pornography. California, Texas, Illinois, New York, and Florida are among them. Several of these statutes also provide civil remedies, allowing victims to sue for damages. At the federal level, the TAKE IT DOWN Act, enacted in May 2025, criminalizes the knowing publication of nonconsensual intimate imagery, including AI-generated imagery, and requires covered platforms to establish a notice-and-removal process for victims.
Voice cloning and impersonation raise distinct issues. AI can replicate a person's voice from a short audio sample with enough fidelity to deceive family members and financial institutions. The FTC has issued warnings about AI voice cloning used in fraud schemes. Several states have enacted laws addressing voice and likeness rights that can apply to unauthorized AI cloning, though coverage varies considerably.
For public figures, the right of publicity — a legal right to control commercial use of one's name, image, and likeness — provides some protection against unauthorized AI-generated content used commercially. For private individuals, existing defamation, harassment, and identity theft laws may apply depending on how the synthetic content is used and what harm it causes. None of these frameworks were designed for AI-generated synthetic media, and gaps are significant.
AI in Consumer Decisions: Credit, Insurance, and Housing
AI-driven scoring and decision-making are embedded in financial services in ways most consumers do not see. Credit decisions, insurance pricing, and tenant screening all increasingly rely on algorithmic models that go well beyond traditional credit scores.
The Fair Credit Reporting Act requires that when an adverse action is taken based on a consumer report (a loan denial, a higher interest rate, a rental rejection), the consumer must be notified, told which consumer reporting agency supplied the report, and given the right to obtain a free copy and dispute inaccuracies. The requirement to give specific reasons for a credit denial comes from the Equal Credit Opportunity Act, discussed next. The FCRA's protections apply when AI-driven reports from consumer reporting agencies are used; they do not reach proprietary models that lenders or insurers build and use internally, which is where much of the AI innovation in financial services is happening.
The Equal Credit Opportunity Act prohibits discrimination in credit on the basis of race, color, religion, national origin, sex, marital status, or age. It applies to AI-driven credit decisions. A model that uses zip code as a proxy variable may produce racially discriminatory outcomes even if race is not explicitly in the model. Regulators at the CFPB have signaled increasing scrutiny of algorithmic lending models for fair lending compliance.
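How a proxy variable does this can be hard to picture, so here is a toy simulation with entirely invented data. The protected characteristic (here just "group") is never an input to the model; because it correlates with zip code, the model's zip-code penalty still skews approval rates by group.

```python
# Toy simulation (all data invented): a credit model that never sees a
# protected characteristic can still skew outcomes by group when an
# input like zip code is correlated with group membership.
import random

random.seed(0)

applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Assumed correlation: group B applicants are likelier to live in
    # the zip codes the model penalizes.
    high_cost_zip = random.random() < (0.7 if group == "B" else 0.3)
    income = random.gauss(60_000, 15_000)
    applicants.append({"group": group, "high_cost_zip": high_cost_zip,
                       "income": income})

def score(a: dict) -> float:
    # The model's only inputs are income and zip code; "group" is unused.
    s = a["income"] / 1_000
    if a["high_cost_zip"]:
        s -= 15  # zip-code penalty: the proxy at work
    return s

for g in ("A", "B"):
    members = [a for a in applicants if a["group"] == g]
    approved = sum(1 for a in members if score(a) >= 55)
    print(f"group {g}: approval rate {approved / len(members):.1%}")
```

On a run like this, group B's approval rate comes out well below group A's even though the scoring function only ever reads income and zip code. That gap, not the model's input list, is what a disparate impact analysis under ECOA examines.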
Insurance pricing using AI-driven models is regulated at the state level, and the degree of oversight varies dramatically by state. Some states require actuarial justification for rating factors; others are more permissive. The use of non-traditional data sources in insurance pricing — social media behavior, purchasing history, telematics — raises fairness and discrimination concerns that insurance regulators are beginning to examine.
Intellectual Property and AI-Generated Content
AI systems trained on existing human-created work produce outputs that raise unresolved questions about copyright ownership and infringement. The legal landscape is developing rapidly through both litigation and regulatory guidance.
The Copyright Office has issued guidance stating that AI-generated content with no human authorship is not eligible for copyright protection. Work that involves meaningful human creative input — selecting, arranging, or modifying AI outputs — may be eligible for copyright in the human-authored portions. The line between sufficient and insufficient human creative contribution is not clearly defined and will be worked out through litigation over time.
Whether training AI models on copyrighted works without permission constitutes infringement is being litigated actively. Major lawsuits brought by news organizations, visual artists, and authors against AI companies are proceeding through federal courts. The outcomes will significantly affect both the AI industry and the rights of creators whose work was used in training data. Early district court rulings have pointed in different directions on fair use, and no appellate court has definitively resolved the question as of early 2026; these cases are expected to produce significant precedent within the next few years.
Liability When AI Causes Harm
When an AI system causes harm, determining who is legally responsible is genuinely complicated. The developer who built the model, the company that deployed it, the business that used the output, and in some framings even the parties who supplied the training data are all potential defendants, depending on the theory of liability.
Product liability law is one avenue. If an AI system is defective in design or fails to warn users of known risks, product liability principles may apply. Courts are still developing the framework for applying product liability doctrine to AI outputs, which are not physical products and whose "defects" are often statistical rather than discrete.
Negligence is another theory. A company that deploys an AI system without adequate testing, monitoring, or human oversight may be negligent when the system causes foreseeable harm. The question of what level of oversight is adequate is one courts will be answering for years.
Section 230 of the Communications Decency Act, which generally immunizes online platforms from liability for third-party content, has been argued to shield AI-generated content in some contexts. Courts have not uniformly accepted this argument, and the application of Section 230 to AI-generated outputs remains contested.
A Common Scenario
A 54-year-old marketing manager applies for 60 positions over six months using her real resume and receives almost no responses. A younger colleague with similar qualifications applies to comparable positions and receives significantly more callbacks. She suspects the AI screening tools used by employers may be filtering her out based on graduation year, which correlates with age. Under the Age Discrimination in Employment Act, she would need to show that the screening tools produce a disparate impact on applicants over 40. Without access to the employers' data or the algorithms' weighting, building that case is difficult. If she is in New York City, she can at least confirm whether an employer used a covered automated employment decision tool and whether it was audited. She cannot easily obtain the audit results or the model's specific parameters. This is the gap between the legal protection that exists on paper and the practical ability to enforce it.
Frequently Asked Questions
Can I find out if AI was used to reject my job application?
In most states, no — there is no general legal requirement that employers disclose AI use in hiring decisions. New York City's Local Law 144 requires disclosure for employers using covered automated employment decision tools in hiring or promotion decisions affecting New York City residents. Illinois requires disclosure specifically for AI video interview analysis. Outside of these and similar state or local laws, employers have no obligation to tell applicants that AI was involved in screening or rejection decisions.
Is AI-generated content protected by copyright?
Content generated entirely by AI with no meaningful human creative contribution is not eligible for copyright protection under current U.S. Copyright Office guidance. The copyright system protects human authorship. If a person makes meaningful creative choices in selecting, arranging, or modifying AI outputs, those human-authored elements may be protectable. The boundary between protectable and unprotectable AI-assisted work is not yet precisely defined and will be clarified through future guidance and litigation.
What can I do if AI generated a fake image or video of me?
Your options depend on the nature of the content and, in part, on your state. If the content is nonconsensual intimate imagery, more than 20 states have laws that may provide criminal penalties and civil remedies, and the federal TAKE IT DOWN Act requires covered platforms to remove such content when a victim reports it. If the content is defamatory, existing defamation law may apply. If it is used commercially without your permission, right of publicity law may provide a claim. Most major platforms also have their own policies against deepfake pornography and nonconsensual synthetic media and provide reporting mechanisms for takedown requests. Acting quickly matters because synthetic content can spread rapidly once published.
Can a lender legally use AI to deny my loan application?
Yes, with limits. Lenders can use AI-driven models in credit decisions, but federal fair lending laws still apply. A model that produces disparate outcomes by race, gender, or age may violate the Equal Credit Opportunity Act regardless of whether those characteristics were explicitly included in the model. If you receive an adverse action notice on a credit application, you have the right to know the specific reasons for the denial. You can request your consumer report and dispute inaccuracies under the Fair Credit Reporting Act. If you believe discrimination was involved, you can file a complaint with the CFPB or the relevant federal banking regulator.
Who is legally responsible when an AI system causes harm?
It depends on the type of harm, the type of AI system, and the applicable legal theory. Potential defendants include the company that developed the model, the company that deployed it in a product or service, and the business that used the AI output to make a consequential decision. Courts are still developing doctrine on this question. Product liability, negligence, and consumer protection law are all potential frameworks. For now, the clearest accountability tends to be at the deployment layer — the business that chose to use an AI tool in a consequential context has the most direct relationship with the affected person and the most control over how the output was used.