The use of artificial intelligence gets a clueless lawyer in trouble / Bright

by time news

2023-06-25 12:01:55

Digital tools have simplified our lives and have become allies in our daily tasks. Although artificial intelligence has proven its usefulness in many fields, it is important not to abuse it and to avoid relying on its ability to do all the work. The protagonist of this story learned that valuable lesson the hard way.

In Manhattan, the name of lawyer Steven A. Schwartz has recently become widely known. The trouble started after Schwartz’s law partner, Peter LoDuca, filed a lawsuit against a Colombian airline on behalf of a client who was allegedly injured when a metal cart struck his knee on a flight to New York City.

When the airline’s lawyers asked the court to throw out the lawsuit, Schwartz wrote a brief citing more than half a dozen similar cases.

The document referred to cases against American, Korean, and Chinese airlines, each with its own analysis of federal law, all of which strongly supported his client’s case.

There was just one problem: none of those involved, not even the airline’s lawyers, let alone the judge, could find the decisions or the quotations cited and summarized in the brief.

That was because the artificial intelligence tool ChatGPT had fabricated all of it.

“The court is facing an unprecedented situation,” Manhattan federal judge P. Kevin Castel wrote in a May 4 order, directing Schwartz and LoDuca to appear at a June 8 hearing to face possible penalties for submitting documents of doubtful veracity.

“Six of the cases mentioned appear to be fictitious court decisions with false citations and incorrect internal references,” wrote Castel.

According to reports, during the hearing the lawyer tried several times to explain why he had not further investigated the cases that ChatGPT had provided him.

“God, how I wish I had and I didn’t,” Schwartz said, adding that he was ashamed, humiliated and deeply sorry. “I didn’t understand that ChatGPT could fabricate cases,” he told Judge Castel.

When Schwartz responded to the judge’s questions, the reaction in the courtroom, packed with about 70 people including lawyers, law students, paralegals and professors, was full of laughter and sighs.

“I kept getting tricked by ChatGPT. It’s embarrassing,” Schwartz said.

This episode, which arose from a not especially complicated lawsuit, has fascinated the technology world, where there is a growing debate about the dangers of artificial intelligence, with some even considering it a threat to humanity.

“I heard about this new site, which I falsely assumed was like a super search engine,” said Schwartz. He was clearly wrong.

Schwartz’s affidavit on Wednesday contained screenshots of the lawyer trying to confirm the authenticity of the case with the tool.

“Is Varghese a real case?” Schwartz asked the chatbot. “Yes,” ChatGPT affirmed, “it is a real case.”

Schwartz then asked for the source. The chatbot again claimed that the fake case was real.

“I apologize for the earlier confusion,” ChatGPT replied. “After double checking, I found that the Varghese case does exist and can be found in legal research databases. I apologize for any inconvenience or confusion my previous responses may have caused.”

When Schwartz asked the chatbot if any other cases were fake, ChatGPT replied that the other cases “are real” and can be found in “reputable legal databases.”

“I deeply regret my actions,” Schwartz said in court. “I have suffered both professionally and personally due to the widespread publicity. I feel ashamed, humiliated and extremely sorry. To say that it has been a humiliating experience is an understatement.”

Schwartz testified that he had never before come close to being sanctioned in any case or court. He also mentioned that he had since completed a continuing legal education course on artificial intelligence.

ChatGPT’s release in 2022 garnered worldwide attention due to its ability to generate human-like responses, based on what it has learned from scanning vast amounts of information online.

Faced with emerging concerns, European lawmakers have moved quickly in recent months to include provisions on AI systems as they prepare final legislation.

In the wake of the publicity surrounding Schwartz’s case, a Texas judge issued an order prohibiting the use of artificial intelligence to draft court documents without additional fact-checking by a real person.

As AI becomes more sophisticated and adopted in various arenas, risks related to the fabrication of data and the spread of misinformation may also arise.

Schwartz’s case serves as a reminder and a warning to those seeking to lean on AI: human oversight remains essential to ensure the accuracy, reliability, and integrity of information in the world we live in.

Artificial intelligence can be a good tool if it is used correctly and the data it provides is verified, since it is not infallible.

