Stanford Study Finds AI Legal Tools Generate Hallucinations in One Out of Six Queries
Published May 30, 2024

According to a recent Stanford study, one out of every six queries using AI-powered legal tools results in hallucinations.

RegLab and the Institute for Human-Centered Artificial Intelligence (HAI) at Stanford University examined the efficacy of AI-driven legal research tools from Thomson Reuters and LexisNexis.

Although these products are marketed as specialized legal AI tools, the researchers found that they generated inaccurate or erroneous information “an alarming amount of the time.”

The report stated that the main benefit of legal AI is its ability to expedite the laborious task of locating pertinent legal materials. Users may be misled if a tool presents sources that appear reliable but are in fact irrelevant or contradictory. Placing too much faith in a tool’s results could lead to flawed legal conclusions and even incorrect court rulings.

The researchers posed more than 200 questions to the AI tools, including general research questions, jurisdiction-specific queries, and questions simulating a user with a mistaken understanding of the law. The tools were also asked about straightforward facts that require no legal interpretation.

The researchers found that the AI legal tools from Thomson Reuters and LexisNexis produced hallucinatory results more than 17% of the time, or roughly one in every six queries. These hallucinated responses were either wholly fabricated or answered the question correctly while citing the wrong sources.
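
To put that headline figure in perspective, here is a back-of-the-envelope calculation in Python. The 200-query count and the 17% rate come from the article; everything else is illustrative, not from the study itself.

queries = 200               # roughly the number of questions posed in the study
hallucination_rate = 0.17   # "more than 17%" per the findings
expected = queries * hallucination_rate
print(f"At a {hallucination_rate:.0%} rate, expect ~{expected:.0f} hallucinated "
      f"answers out of {queries} queries, i.e. about 1 in {1 / hallucination_rate:.0f}.")
# Output: At a 17% rate, expect ~34 hallucinated answers out of 200 queries, i.e. about 1 in 6.

In other words, a rate just above 17% is where the “one out of every six queries” framing comes from, since 1/6 is roughly 16.7%.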

Written By
techspotai.com
