Hood, the Australian who threatens to sue ChatGPT for defaming him

by time news

Artificial intelligence (AI) has the potential to improve the world, but its use is by no means free of risks for society. Just ask Brian Hood, mayor of the small Australian town of Hepburn Shire, who is threatening to file the first defamation lawsuit against OpenAI, the company behind ChatGPT, the machine capable of answering virtually any question. Whether the data the tool shares is true, or simply misinformation, is another matter entirely.

The case began after the politician discovered that the 'smart chatbot' was telling users in its responses that he had been sentenced in 2011 to 30 months in prison as a guilty party in a high-profile case involving the bribery of foreign officials from Malaysia and Indonesia. The problem? In reality, Hood was never convicted by any court. He was not even among the defendants in the case; he was the one who reported it. "(When I found out about the mistake) I was stunned. Because it was so wrong, so wildly wrong, that I just felt amazed. And then I got very angry about it," Hood said in statements to the 'Sydney Morning Herald'.

According to the Australian outlet, the mayor's lawyers sent a complaint to OpenAI at the end of last month that, for the moment, has received no response. If the case goes ahead and reaches trial, it will become the first defamation suit brought against an AI-powered chatbot over the spread of disinformation. It could also help determine whether companies bear any legal liability for misleading results their machines may deliver.

As for ChatGPT, OpenAI itself has acknowledged on several occasions that the technology behind it is not perfect: it can make mistakes and show users false information.
Even the latest version of the AI, released just a few weeks ago, has this problem. "I am concerned that these models could be used for large-scale disinformation," Sam Altman, chief executive of OpenAI, recently acknowledged in an interview with 'ABC'.

In the end, ChatGPT, like other similar tools, does not invent anything. Not a word. All its data comes from the Internet: from scientific papers to posts on Facebook walls. If the human being who produced the content the machine was trained on is wrong, so is the machine. That applies to the case of Brian Hood, but also to many others. We have already seen systems powered by the same technology 'hallucinate', even declaring their love to the user or threatening to hack them. Even Europol, along with several cybersecurity companies, has warned about the danger of the technology being exploited en masse by cybercriminals.

Concern about the misuse of AI, which is still not subject to any kind of regulation, is beginning to spread among legislators. In the EU, work is under way on a regulation expected to see the light before the end of the year; in the US, ChatGPT is being investigated to determine whether its launch violated federal consumer laws, given that the tool underwent no independent evaluation before being made available to users. Indeed, just last week more than a thousand business leaders and academics signed an open letter calling for a halt to the development of new AI until security standards are established.
