In 2023, two attorneys filed a brief in federal court containing citations to cases that did not exist. An AI research tool had generated them, the lawyers had not verified them, and the judge was not amused. The incident became a widely reported warning about AI in legal practice. Since then, courts across the country have been grappling with a genuine question: how do you capture the efficiency benefits of AI tools without introducing new risks into a system where accuracy and fairness are foundational?
How AI Is Being Used in Courts and Legal Practice Today
AI is not making legal decisions. No court in the United States is using AI to determine guilt, decide motions, or issue rulings. What is happening is more incremental: AI tools are being used to assist with the volume and complexity of legal work in ways that are supposed to free up human judgment for the decisions that matter most.
Lawyers are using AI tools to research case law, summarize lengthy documents, draft initial versions of contracts and briefs, and analyze large document sets in discovery. Tasks that previously required hours of associate time can now be completed in minutes, at least in rough form. The output still requires human review, but the starting point is further along.
Courts themselves are experimenting with AI-assisted systems for scheduling, case management, and processing high volumes of routine filings. Some jurisdictions use algorithmic tools in criminal proceedings for risk assessment, evaluating the likelihood that a defendant will fail to appear for trial or reoffend. These tools are distinct from generative AI assistants, have been in use for considerably longer, and carry their own set of controversies.
E-discovery platforms have incorporated AI for years to identify relevant documents in large litigation matters. A case involving millions of emails might use AI to prioritize which documents attorneys review first, significantly reducing the time and cost of the discovery process. This use is well-established and generally accepted by courts, though questions about transparency and reliability remain.
Why Courts Are Moving Carefully
The legal system moves deliberately by design. Court decisions affect liberty, property, family relationships, and constitutional rights. The consequences of error are serious and sometimes irreversible. That institutional caution is not obstruction — it reflects an appropriate skepticism toward any tool that could introduce new failure modes into a high-stakes process.
The fabricated citations problem is the most visible concern. Large language models can generate plausible-sounding but entirely fictional case names, docket numbers, and holdings. The output looks authoritative. A lawyer under deadline pressure who does not verify each citation before filing can inadvertently present fabricated authority to a court. Several attorneys have already faced sanctions for exactly this. Courts have responded by emphasizing that AI use does not change a lawyer's professional responsibility to verify every factual and legal assertion in a filing.
Bias is a deeper and more systemic concern. AI systems learn from historical data. If the legal system's historical record reflects disparate treatment of particular groups — in bail decisions, sentencing, or case outcomes — AI tools trained on that data may perpetuate or amplify those disparities without any individual actor making a biased decision. The bias is embedded in the training data and reproduced at scale. Courts and researchers are actively examining whether risk assessment tools used in criminal proceedings produce racially disparate outcomes, with mixed findings across different tools and jurisdictions.
Explainability is a third concern. Many AI systems, particularly those based on large neural networks, cannot explain in human-comprehensible terms how they reached a particular output. In a legal system built on the principle that decisions must be reasoned and reviewable, a tool that produces recommendations without a traceable rationale poses a challenge to fundamental due process values.
AI and Legal Accuracy: The Verification Problem
The core accuracy problem with AI in legal work is that the tools are optimized to produce fluent, confident-sounding output — not necessarily correct output. A well-trained language model will generate a case citation in the same confident tone whether the case exists or not. It has no internal alarm that fires when it is fabricating. The plausibility of the output and its accuracy are entirely separate things, and the output is often plausible even when wrong.
This means that AI-assisted legal work requires a verification layer that takes time and expertise. Checking whether cited cases actually exist, whether they say what the AI claims they say, and whether they are still good law is not a quick task. In fields like law, where the authority of a source and its precise holding both matter, verification cannot be delegated to another AI tool. A lawyer using AI to research and draft must still read the primary sources.
Courts have begun requiring lawyers to certify that AI-generated content has been reviewed and verified. Some jurisdictions now require disclosure when AI tools were used in preparing filings. The goal is not to prevent AI use but to ensure that human professional responsibility remains clearly assigned and cannot be offloaded to a tool.
Risk Assessment Tools and Fairness Questions
A separate and longer-running debate involves algorithmic risk assessment tools used in criminal proceedings. Tools such as COMPAS and the Public Safety Assessment (PSA) are used in some jurisdictions to generate scores predicting the likelihood that a defendant will fail to appear for court or commit a new offense before trial. Judges may consider these scores when making bail and pretrial detention decisions.
These tools are controversial. A 2016 investigation found that one widely used tool produced racially disparate false positive rates: among defendants who did not go on to reoffend, Black defendants were labeled higher risk at roughly twice the rate of white defendants. Defenders of the tools argue that they reduce the influence of individual judicial bias and that their overall predictive accuracy is reasonable. Critics argue that historical data encoding systemic racial disparities cannot produce racially neutral predictions, and that defendants have a right to understand and challenge the basis for decisions affecting their liberty.
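To make the statistic concrete: a false positive rate here is the share of defendants who did not reoffend but were nonetheless labeled high risk. The sketch below uses entirely made-up numbers, included only to illustrate how such a disparity is measured; it does not reproduce data from COMPAS, the PSA, or any real study.

```python
# Hypothetical illustration of a group-wise false positive rate comparison.
# Every record below is invented for illustration; none comes from a real tool or dataset.

records = [
    # (group, labeled_high_risk, actually_reoffended)
    ("A", True,  False),
    ("A", True,  False),
    ("A", False, False),
    ("A", True,  True),
    ("B", True,  False),
    ("B", False, False),
    ("B", False, False),
    ("B", True,  True),
]

def false_positive_rate(group: str) -> float:
    """Share of non-reoffenders in `group` who were labeled high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("A", "B"):
    print(f"Group {g} false positive rate: {false_positive_rate(g):.0%}")

# A gap between the two groups' rates, even when overall accuracy looks similar,
# is the kind of disparity the 2016 investigation reported.
```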
Courts are divided on how much transparency defendants are entitled to about these tools. Some courts have ruled that defendants do not have a constitutional right to examine the proprietary algorithms underlying risk scores. Others have required more disclosure. The legal framework around these tools continues to develop as both the technology and the litigation surrounding it evolve.
New Rules Emerging Across Jurisdictions
Courts at every level are developing guidance on AI use, and the landscape is changing quickly. The Judicial Conference of the United States has issued guidance for federal courts. Individual federal districts have adopted their own local rules. State courts in Texas, Illinois, California, and New York have all issued or are developing guidance for practitioners.
The most common requirements focus on disclosure and certification. Lawyers may be required to disclose whether AI tools were used in preparing a filing, certify that any AI-generated content was reviewed and verified by a human attorney, and confirm that the filing complies with all applicable professional responsibility rules regardless of how it was prepared. Some courts go further, requiring that AI-assisted work be specifically identified within the filing itself.
Professional responsibility rules are also evolving. State bar associations are issuing ethics opinions on AI use in legal practice, addressing questions like competence (do lawyers have a duty to understand the tools they use?), confidentiality (what happens to client data submitted to AI platforms?), and supervision (who is responsible for AI output reviewed by a junior attorney?). The ABA has issued formal guidance and continues to monitor developments.
A Common Scenario
A small business owner is involved in a commercial dispute. Her attorney uses an AI research tool to identify relevant precedents and drafts the initial brief with AI assistance. The attorney reviews and verifies the citations, corrects several that are inaccurate, and certifies the filing as required by the local court rule. The process takes significantly less attorney time than a traditional research and drafting approach, which reduces the client's legal fees. The attorney's verification step catches the errors before they reach the court. The outcome is efficient and accurate, but only because the attorney treats the AI output as a starting point requiring review, not as a finished product.
What This Means If You Are Involved in a Legal Matter
For most people involved in litigation or legal proceedings, AI's impact is currently indirect. Your attorney may be using AI tools to prepare your case more efficiently. Court scheduling and document management systems may incorporate AI behind the scenes. If your case involves a large volume of documents in discovery, AI-assisted review may be used to process them.
You have the right to ask your attorney how technology is being used in your case and what review processes are in place. If a risk assessment tool is used in a criminal proceeding involving you, you may have rights to understand and challenge the score depending on the jurisdiction. As courts develop clearer rules around AI disclosure, the information available to parties about how AI is affecting their cases will likely increase.
Frequently Asked Questions
Can AI be used to decide court cases?
No. AI tools are not making judicial decisions in any U.S. court. Judges decide cases. AI may assist with research, document review, scheduling, or drafting, but the legal decision — the ruling on a motion, the verdict, the sentence — remains a human judgment. Courts have been explicit that AI is a supporting tool, not a decision-maker, and that human accountability for legal decisions is non-negotiable.
What happened with lawyers who used fake AI citations in court?
Several attorneys have faced sanctions and public reprimand for submitting court filings containing citations to cases that did not exist, generated by AI tools and not verified before filing. In some cases, courts imposed monetary sanctions and required remedial legal education. The cases established clearly that using an AI tool does not relieve an attorney of the professional responsibility to verify the accuracy of everything in a filing.
Are courts required to disclose when they use AI?
Requirements vary by jurisdiction and are still developing. Currently, most disclosure rules target attorneys filing documents, not courts using AI for administrative functions. As AI use by courts themselves becomes more widespread, transparency and disclosure requirements are likely to expand. Several legal scholars and advocacy organizations are pushing for courts to be subject to the same disclosure expectations they impose on lawyers.
Can AI bias affect my legal case?
Potentially, depending on the type of AI involved and how it is used. Risk assessment tools used in bail and sentencing decisions have been shown to produce racially disparate outcomes in some studies. AI used for document review or legal research introduces different risks — primarily accuracy rather than demographic bias. The best protection is an attorney who understands the tools being used, applies appropriate skepticism to AI output, and verifies results before relying on them.
What is a risk assessment tool and how is it used in court?
A risk assessment tool is an algorithm that generates a score predicting the likelihood that a defendant will fail to appear for trial or commit a new offense before their case is resolved. Judges in some jurisdictions may consider these scores when making bail and pretrial detention decisions. The tools are controversial because they have been shown to produce racially disparate results in some studies, and because defendants often have limited ability to examine or challenge the algorithmic basis for a score that affects their liberty.