Was The Fugees Rapper Pras Sentenced For An AI Hallucination?

by time news

2023-10-22 04:31:06
In Summary: Pras Michel claims an AI model sabotaged his multimillion-dollar fraud defense, a claim EyeLevel’s Neil Katz denies. EyeLevel built an AI trained on court transcripts to help lawyers navigate complex cases. Michel’s new attorney has filed a motion for a new trial, arguing that the AI-generated closing argument was frivolous.

Artificial intelligence is making its way into all aspects of life, including the American legal system. But as the technology becomes more ubiquitous, the problem of AI-generated lies or nonsense, also known as “hallucinations,” remains.

These AI hallucinations are at the center of claims by former Fugees member Prakazrel “Pras” Michel, who accused an AI model created by EyeLevel of sabotaging his multimillion-dollar fraud case, a claim EyeLevel co-founder and COO Neil Katz calls false.

In April, Michel was convicted on 10 counts in his conspiracy trial, including obstructing witnesses, falsifying documents, and acting as an unregistered foreign agent. Michel faces up to 20 years in prison after being convicted of acting as an agent of China; prosecutors said he funneled money in an attempt to influence American politicians.

“We were hired by Pras Michel’s lawyers to do something unique, something that had not been done before,” Katz told Decrypt in an interview.

According to a report from the Associated Press, during closing arguments, defense attorney David Kenner incorrectly quoted a lyric from the song “I’ll Be Missing You” by Sean “Diddy” Combs, wrongly attributing the song to The Fugees.

As Katz explained, EyeLevel was tasked with building an AI trained on court transcripts that would let lawyers ask complex, natural-language questions about what happened during the trial. He said the system did not draw on other information, such as material from the internet.

Court proceedings notoriously generate mountains of paperwork. The criminal trial of FTX founder Sam Bankman-Fried, which is still ongoing, has already produced hundreds of documents, while the crashed cryptocurrency exchange’s separate bankruptcy case has generated more than 3,300, some of them dozens of pages long.

“This is a complete game-changer for complex litigation,” Kenner wrote in an EyeLevel blog post. “The system turned hours or days of legal work into seconds. This is a glimpse into the future of how cases will be conducted.”

On Monday, Michel’s new defense attorney, Peter Zeidenberg, filed a motion – published online by Reuters – for a new trial in the United States District Court for the District of Columbia.

“Kenner used an experimental artificial intelligence program to write his closing argument, which made frivolous arguments, mixed up the schemes, and failed to highlight key weaknesses in the government’s case,” Zeidenberg wrote. He added that Michel is seeking a new trial “because numerous errors, many of them precipitated by his ineffective defense counsel at trial, undermine confidence in the verdict.”

Katz disputed the claims.

“It didn’t happen like they say; this team has no knowledge of artificial intelligence or our product in particular,” Katz told Decrypt. “Their statement is full of misinformation. I wish they had used our AI software; they might have been able to write the truth.”

Michel’s lawyers have not yet responded to Decrypt’s request for comment. Katz also disputed claims that Kenner has a financial interest in EyeLevel, saying the company was hired to assist Michel’s legal team.

“The allegation in your filing that David Kenner and his associates have some sort of secret financial interest in our companies is categorically false,” Katz told Decrypt. “Kenner wrote a very positive review about the performance of our software because he felt that was the case. He was not paid for it; he was not given shares.”

Launched in 2019, Berkeley-based EyeLevel develops generative AI models for consumers (EyeLevel for CX) and legal professionals (EyeLevel for Law). Katz said EyeLevel was one of the first developers to work with OpenAI, the creator of ChatGPT, and that the company aims to provide “truthful,” hallucination-free AI tools that are robust enough for individuals and legal professionals who may not have the funds to pay a large team.

Typically, generative AI models are trained with large data sets collected from various sources, including the Internet. What makes EyeLevel different, according to Katz, is that this AI model is trained only on court documents.

“The AI was trained exclusively on the transcripts, and exclusively on the facts presented in court by both sides, and also what the judge said,” Katz said. “And when you ask this AI questions, it provides answers based solely on facts, without hallucinations, based on what has happened.”

Regardless of how an AI model is trained, experts warn about such programs’ tendency to lie, or hallucinate. In April, ChatGPT falsely accused U.S. criminal defense attorney Jonathan Turley of committing sexual assault. The chatbot even went so far as to provide a fake link to a Washington Post article to back up its claim.

OpenAI is investing heavily in combating AI hallucinations, even hiring third-party security testing teams to test its AI toolset.

“When users sign up to use the tool, we strive to be as transparent as possible that ChatGPT may not always be accurate,” OpenAI says on its website. “However, we recognize that there is much more work to do to further reduce the likelihood of hallucinations and educate the public about the current limitations of these AI tools.”

Edited by Stacy Elliott and Andrew Hayward
