AI Evidence in the Courtroom: What Businesses in Tech, Healthcare, and Beyond Need to Know

July 16, 2025 | Communications & Media Industry Legal Blog, Education Industry Legal Blog, Healthcare Industry Legal Blog, Insurance Industry Legal Blog, Manufacturing & Distribution Industry Legal Blog, Professional Services Industry Legal Blog, Technology Industry Legal Blog

Reading Time: 5 minutes


Artificial intelligence (AI) is transforming the way we do business, from streamlining insurance claims and enhancing diagnostics to detecting fraud and analyzing supply chain data. But as AI enters the courtroom, one critical question looms: Can we trust machine-generated evidence?

Federal courts are one step closer to answering that question. On May 2, 2025, the U.S. Judicial Conference’s Advisory Committee on Evidence Rules voted in favor of Proposed Rule 707, a new rule designed to address the reliability of AI-generated evidence in litigation. The implications for businesses are wide-reaching.

What Is Rule 707?

Proposed Rule 707 would apply the same standards used for human expert witnesses to AI-generated evidence presented without a human expert. That means:

  • The AI must rely on reliable data
  • Its methods must be scientifically sound
  • The output must be accurate and relevant to the case

If your business uses AI systems to analyze information or generate reports that could end up in a courtroom, this rule could significantly affect how you prepare for legal disputes. The Committee emphasized that Rule 707 is not meant to encourage replacing human experts with machines. Instead, it ensures that machine-generated evidence meets the same scrutiny as live testimony.

Who Does This Impact?

Rule 707 has implications for a wide range of industries already leveraging AI:

  1. Technology & Software

Companies building or using AI-driven platforms, especially in analytics, SaaS, or image processing, should be prepared for courts to request validation, transparency, and even access to algorithms.

  2. Healthcare

Hospitals and medical tech companies using AI for diagnostics or treatment predictions may see increased scrutiny in malpractice or reimbursement litigation.

  3. Insurance

AI used to estimate losses, flag fraudulent claims, or assess risk profiles could be challenged in bad-faith or coverage litigation.

  4. Communications & Media

Deepfake technology, AI video enhancement, and content generation tools could face admissibility hurdles in IP, defamation, or privacy disputes.

  5. Education

EdTech tools using AI for student assessments, behavior prediction, or admissions screening may need to demonstrate fairness and reliability.

  6. Manufacturing

Manufacturers relying on AI for quality control, defect analysis, or process optimization may need to defend the validity of those systems in litigation.

  7. Professional Services

Consulting firms, accountants, and legal providers using AI-driven reports or analyses in client deliverables may face scrutiny if those results are used in court.

  8. Expert Witnesses

Expert witnesses hired to review evidence in a civil or criminal case who utilize AI should be prepared to defend the authorship, reasoning, methodologies, and conclusions in their reports. 

Protecting Confidential Information in the Age of AI

Any business handling private, sensitive, personal, protected, privileged, or otherwise confidential information should exercise caution when using open, publicly hosted AI platforms. Data entered into these platforms is transmitted externally and may no longer remain confidential. Organizations dealing with such information should strongly consider closed or locally hosted AI solutions that do not expose data to public or open systems.
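
To make the distinction concrete, below is a minimal sketch of querying a locally hosted model so prompts never leave your own infrastructure. It assumes an Ollama server running at its default local address; the endpoint, model name, and prompt are illustrative placeholders, not a recommendation of any particular tool.

```python
import json
import urllib.request

# Minimal sketch: sending a prompt to a locally hosted model (an assumed
# Ollama server on localhost:11434) rather than a public cloud API, so the
# confidential text never leaves machines your organization controls.
def ask_local_model(prompt: str, model: str = "llama3") -> str:
    payload = json.dumps({
        "model": model,   # illustrative model name
        "prompt": prompt,
        "stream": False,  # return one complete response instead of chunks
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # default local Ollama endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

if __name__ == "__main__":
    print(ask_local_model("Summarize this internal memo: ..."))
```

The same pattern applies to any on-premises deployment; the key point is that the request and its data stay on hardware your organization controls.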

The potential consequences of disclosing confidential information—whether accidental or intentional—can be severe, impacting clients or customers both civilly and criminally. Moreover, many business insurance policies may not cover liabilities arising from such disclosures. To mitigate these risks, businesses should take proactive measures: update employee handbooks, revise internal policies and procedures, and amend contracts with vendors and third parties.

Employee training is essential to ensure responsible AI use tailored to your industry, and your partners, contractors, and vendors should be held to the same standard of caution.

What the Courts Are Saying

Recent cases already hint at how courts are thinking about AI evidence:

  • A New York judge rejected ChatGPT-generated billing rates, citing the tool’s unreliability.
  • A Washington state judge excluded an AI-enhanced video when the expert couldn’t explain the model’s training data.
  • A Florida judge donned a VR headset to review a crime scene recreation—but emphasized the importance of proving the tech’s reliability before a jury sees it.
  • Florida courts are beginning to require affirmative disclosure on any filing that was generated, in whole or in part, by AI; notably, the Seventeenth Judicial Circuit in Broward County already requires this disclosure.

Even though Rule 707 isn’t final yet, judges are already subjecting machine-generated evidence to close scrutiny.

Rule 707: Not Without Critics

While the Advisory Committee voted 8–1 in favor of Rule 707, not everyone agreed. The Department of Justice dissented, arguing that Rule 702 already provides enough protection and that Rule 707 tries to regulate problems that haven’t fully materialized yet. 

Others raised concerns about how courts will actually apply the rule, especially with AI advancing faster than the legal system can respond. That’s why the proposed rule will likely go through a period of public comment, giving businesses, technologists, and legal professionals a chance to weigh in.

How Your Business Can Prepare Now

Even before Rule 707 becomes law, smart businesses are taking action:

  • Vet Your AI Tools: Know what data is going in, how it’s being processed, and what comes out. Transparency is key.
  • Keep Good Records: Document the development and use of AI systems, especially inputs, outputs, and changes over time (see the sketch after this list).
  • Anticipate Disclosure: Be ready to share technical details (within reason) if a court asks. This might include validation studies, error rates, or access for independent testing.
  • Authenticate Multimedia Evidence: If your case involves video, audio, or image evidence, have a process to confirm its authenticity, especially if it was enhanced by AI.
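
As a concrete illustration of the record-keeping and authentication points above, here is a minimal sketch that logs each AI interaction to an append-only file and fingerprints media files with SHA-256 so that any later alteration is detectable. The file name, field names, and tool name are hypothetical; adapt them to your own systems and retention policies.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("ai_audit_log.jsonl")  # hypothetical append-only audit trail

def sha256_of_file(path: str) -> str:
    """Cryptographic fingerprint of a media file; any later edit,
    including AI enhancement, changes the hash."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def log_ai_interaction(tool: str, prompt: str, output: str,
                       media_path: str | None = None) -> None:
    """Record what went into an AI tool and what came out, with a timestamp."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,
        "input": prompt,
        "output": output,
    }
    if media_path is not None:
        entry["media_sha256"] = sha256_of_file(media_path)
    with LOG_FILE.open("a") as f:
        f.write(json.dumps(entry) + "\n")

if __name__ == "__main__":
    log_ai_interaction(
        tool="claims-triage-model-v2",           # hypothetical internal tool
        prompt="Flag anomalies in claim #1234",  # what went in
        output="No anomalies detected.",         # what came out
    )
```

A dated log of inputs, outputs, and file hashes is exactly the kind of documentation a court could ask for when weighing the reliability of machine-generated evidence.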

Bottom Line

AI-generated evidence is no longer hypothetical—it’s already showing up in litigation. Courts are beginning to draw boundaries, and Rule 707 is one of the most significant steps yet. Whether you’re building the AI, using it to drive business decisions, or responding to claims that rely on it, understanding this rule now could save major headaches later.

Have questions about how your company’s AI use might be viewed under evolving legal standards? Contact the Litigation Team at Jimerson Birr, P.A. We are here to help you stay ahead of the curve.
