
Only a few years ago, AI-generated videos first appeared on social media. Warped mouth movements and mangled hands made it obvious those videos were computer-generated. Today, the “tells” that indicate a video is AI are much more subtle. We’ve entered an era in which fabricated evidence (video, audio, and images generated by AI) is virtually indistinguishable from its real-world counterparts.
For lawyers, AI-generated evidence in particular has become a genuine topic of conversation. Encountering AI-generated materials in litigation requires careful scrutiny and an understanding of evolving courtroom standards.
Before relying on or confronting such evidence, legal teams need a clear understanding of how courts assess authenticity and admissibility. Whether AI-generated material appears in opposing counsel’s exhibits or your own, litigation teams must be ready to evaluate, challenge, and respond to it effectively.
Legal AI technology is being used for a range of administrative, analytical, and business workflows throughout firms, and those uses generally raise no evidence admissibility concerns.
However, courts are increasingly encountering AI-created materials submitted as evidentiary exhibits. More than 1,314 documented cases involve AI-generated content submitted in court filings, and more than 80% of U.S. cases now hinge on some form of video or digital evidence.1 AI-created exhibits include:

If the proposed Federal Rule of Evidence 707, Machine-Generated Evidence, clears the committee approvals still ahead of it, it will provide new guidance on the admissibility and authentication requirements for the use of AI in relation to evidence.2
In the meantime, there are two primary tests to pass:
The general directive of Rule 901, Authenticating or Identifying Evidence, is pretty clear-cut: Every item of evidence submitted needs to be authenticated or identified with its own “evidence sufficient to support a finding that the item is what the proponent claims it is.”3
Rule 901 goes on to detail 10 examples of acceptable authentication but notes that they don’t constitute a complete list. And, since the rule hasn’t been amended since 2011, none of the examples are specific to AI-generated or AI-altered materials.
To strengthen evidentiary foundations and authenticity in accordance with Rule 901, make sure to prioritize:
The Daubert Standard grants judges the responsibility to act as gatekeepers of scientific (and sometimes non-scientific) evidence and expert testimony, in part to assess the viability of novel (or “junk”) science before it’s laid out for a layperson jury.
When it comes to AI evidence, admissibility concerns tend to arise at the level of specific AI products rather than a general understanding of the concepts behind AI. If a specific AI methodology is preserved as a trade secret by its developers, the resulting lack of transparency to the larger scientific community may put that AI-enhanced tool at risk of being judged unreliable and inadmissible.4
When considering tools, vendors, and solutions, vet them on their understanding of and record with evidence authentication, as well as the reliability and transparency of their AI methodologies. You may need to explain in the courtroom how a particular algorithm works as part of surviving a Daubert challenge.
The rise of AI-generated content in the courtroom can pose serious risks to your case outcomes. Deepfakes, synthetic audio, and AI-drafted documents can lead to:
As AI-generated evidence appears more frequently in litigation, attorneys need clear protocols for handling it on both sides of a case. To protect your case outcomes, make sure your team follows these key practices:
Additionally, keep an eye out for emerging challenges and case rulings that provide more direction on safeguarding the authentication of AI-influenced evidence.
AI-assisted analysis tools are accepted and useful parts of case preparation, but fully AI-generated evidentiary materials are increasingly appearing in courtrooms and encountering challenges.
As AI-generated evidence becomes more common, attorneys must combine technological awareness with procedural precision when handling AI-sourced or -influenced exhibits in court.
In addition to a wide range of litigation support services, U.S. Legal Support provides trial graphics and demonstratives that help legal teams manage complex digital evidence with confidence. To ensure best practices and protect your case outcomes, reach out today to learn more about what a partnership could look like.
Sources:
Content published on the U.S. Legal Support blog is reviewed by professionals in the legal and litigation support services field to help ensure accurate information. The information provided in this blog is for informational purposes only and should not be construed as legal advice for attorneys or clients.