Lawyer Admits To Using ChatGPT
A New York lawyer, Steven Schwartz, has admitted in court that his firm used the AI tool ChatGPT for legal research. To verify the authenticity of the cases it produced, Schwartz asked the chatbot whether it was providing false information.
In response, the AI apologised for any prior confusion and reassured Schwartz that the cases were genuine, even suggesting they could be found on legal research platforms such as Westlaw and LexisNexis. Satisfied with the chatbot's response, Schwartz concluded that all the referenced cases were legitimate.
The trial judge said the court was faced with an "unprecedented circumstance" after a filing was found to cite example legal cases that did not exist.
The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward.
Later it emerged that the research was not done by the man's lawyer Peter LoDuca but by one of his colleagues at the law firm. Steven Schwartz, a lawyer with more than 30 years of experience, used the AI tool to find cases that were comparable to the one at hand.
In a statement, Schwartz said that LoDuca was not involved in the research and was unaware of how it was conducted. He said that he "greatly regrets" using ChatGPT and added that he had never used it for legal research before.
Millions of people have used ChatGPT since it launched in November 2022, exploring its ability to seemingly answer questions in natural, human-like language and imitate different writing styles. Schwartz claimed to be "unaware that its content could be false". He pledged never again to "supplement" his legal research using AI "without absolute verification of its authenticity".
ChatGPT creates original text on request, but comes with warnings it can "produce inaccurate information".
But the airline's lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.
"Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations," Judge Castel wrote in an order demanding that the man's legal team explain itself.
Business Today: BBC: NDTV: ChatGPT-4: Telegraph: Flipboard: Yahoo
You Might Also Read:
AI Will Be Disruptive - For Both Security & Jobs: