A lawyer used ChatGPT in court. Now he is the one who must explain to a judge why he included fake citations

by time news

2023-05-28 11:06:32

That ChatGPT is an amazing tool, capable of passing university exams and even boosting productivity in our offices, comes as no surprise at this point. Nor does the fact that it is far from foolproof. Steven A. Schwartz, a lawyer with an office in New York, has just found that out in the worst way imaginable: with a resounding professional blunder that has embarrassed him before the judge in the case he is working on and that may now even earn him a sanction. The New York Times has reported on the case.

All because he relied too heavily on the OpenAI tool.

To understand what happened to Schwartz, one must go back to August 27, 2019, and to a place that has little to do with courts or law offices: a plane belonging to the airline Avianca. That day Roberto Mata was traveling aboard one of the Colombian carrier's aircraft on a flight connecting El Salvador and New York when, mid-flight, a flight attendant struck his knee with a metal serving cart. Unhappy with what had happened, Mata decided to file a lawsuit against Avianca.

“It has revealed itself to be unreliable”

The company asked the federal judge in Manhattan to dismiss the case, but Mata’s lawyers opposed it. And emphatically, too. To argue why the lawsuit should go forward, the attorney working on the Mata case, Steven A. Schwartz, of Levidow, Levidow & Oberman, produced a painstaking 10-page report citing half a dozen legal precedents.

The problem is that that detailed string of judicial decisions, citations and names read more like science fiction than a verifiable legal brief. When Avianca's lawyers began searching for the references listed in Schwartz's filing, they were unable to locate them. Neither could the judge.

No wonder.

The content was inaccurate or outright invented.

More striking than its lack of veracity, though, is who authored the report. The person responsible for such a display of judicial fiction was not Schwartz, at least not directly, but the tool he used to prepare it: ChatGPT. As the lawyer has ended up admitting, he turned to the OpenAI engine to speed up his legal research, a decision that, in light of what happened, was not a wise one. “It has revealed itself to be unreliable,” he admits.

What happened is so striking (and serious) that it has shifted the focus of attention away from the Mata v. Avianca case and onto Schwartz's handling of artificial intelligence. And onto its disastrous consequences, of course.

The judge in the case, P. Kevin Castel, acknowledges being faced with “unprecedented circumstances”: a legal brief peppered with erroneous references. “Six of the cases presented appear to be false court decisions, with false citations and false references.” The New York Times reports that he has already scheduled a hearing for June 8 at which possible sanctions will be discussed.

Schwartz, a veteran with three decades of experience practicing law in New York, has already filed a sworn declaration in which he states that his intention was not to mislead the court or the airline. Quite simply, he explained, it was the first time he had used ChatGPT and he was overconfident: “I was not aware of the possibility that its content could be false.” What's more, he has already promised that he will never again trust OpenAI's AI “without full verification.”


As if the case were not bizarre enough in itself, there is another striking detail. It is not just that ChatGPT provided incorrect names and citations; according to Schwartz, it even offered assurances that its information was true. The lawyer asked the chatbot about the veracity of the references, and its answers only reinforced the errors. “Are the other cases you provided fake?” Schwartz asked, to which the OpenAI AI responded: “No, the other cases I provided are real and can be found in reputable databases.”

Good answer. Bad information. Even worse outcome.

Cover image: Blogtrepreneur (Flickr)

In Xataka: JPMorgan is creating its own ChatGPT: one that will give you investment recommendations


