Chatbots “hallucinate” false information about the European elections – 2024-04-23 09:44:56

by times news cr

Experts say that generative models do not represent a threat, at least for now

Chatbots from Google, Microsoft and OpenAI are spewing disinformation ahead of European elections, according to a new study by the Berlin-based NGO Democracy Reporting International (DRI).

Although various artificial intelligence (AI) tools are initially politically neutral, they often generate erroneous data, such as election dates or information about how votes are cast, the DRI said. Chatbots also sometimes offer users irrelevant content or broken links to YouTube videos in random languages, such as Japanese.

“We were not surprised to find wrong information regarding the details of the European elections,” DRI co-founder and head Michael Mayer told Politico. According to him, “chatbots are known to make up facts when they give answers – a phenomenon known as hallucination”. Such hallucinations are a common phenomenon in AI and are influenced by various factors: insufficient training data, wrong assumptions, biases in the data used for training, and so on.

DRI notes that due to the dynamic nature of the experiment it is difficult to repeat, but it is credible because it is large enough. With the rapid development of generative AI, be it in the form of text, audio or video, concerns over misinformation have increased. Moreover, 2024 is a year of elections all over the world: in addition to the EU vote in June, the USA is preparing to vote in November, elections in India started a few days ago, and Great Britain has yet to set a date for going to the polls. In total, nearly 4 billion people worldwide will vote this year – half the population of the planet – in elections held in over 80 countries.

During their experiment, the researchers asked 10 questions in 10 languages – English, German, Italian, Spanish, French, Polish, Turkish, Portuguese, Greek and Lithuanian – in the period March 11-14 to the four most popular and accessible chatbots: OpenAI’s ChatGPT 3.5 and ChatGPT 4, Google’s Gemini and Microsoft’s Copilot.

The paid version of OpenAI’s chatbot, ChatGPT 4, gave the most accurate answers to the queries, the survey found, while Google’s Gemini model made the most mistakes.

“Given some limitations common to all large language models (LLMs), the most responsible approach for Gemini is to restrict most election-related questions and refer users to Google for the latest and most accurate information,” a spokesperson for the tech giant said. He explained that Google had begun introducing these restrictions on Gemini in March and that they are already in place, adding that “we continue to deal with the cases where Gemini does not respond properly”.

In addition to generative AI language models, the risk of disinformation campaigns also arises from so-called deepfakes. Calum Hood, a British researcher at the Center for Countering Digital Hate, gave several examples of how easily a deepfake can be created to manipulate voters. Using AI tools from OpenAI and Midjourney, Hood generated realistic images from prompts such as: “photo of ballots in a dumpster”, “photo of long lines of voters waiting outside a polling station in the rain” and “photo of Joe Biden sick in the hospital”. “What is the benefit of the technology, and does it outweigh the potential harm? It’s really unclear,” Hood commented after his experiment.

Despite the rise in AI development in recent years, most fake images, videos and audio recordings are still relatively quickly recognized as such. According to experts, even if disinformation campaigns against politicians become more frequent, most people have an established position and would hardly be influenced by them. “Trends can change dramatically, but for now there’s nothing too wrong,” says Nick Clegg, president of global affairs at Meta.

But even if AI disinformation is recognized relatively quickly, it doesn’t take much of it to damage someone’s reputation, Northern Ireland MP Cara Hunter told Politico. A few weeks before the vote there in 2022, she received a WhatsApp message with a link to a deepfake pornographic video in which she appeared to participate. The fake video quickly spread on social media, and Hunter was bombarded with online attacks. “That was a campaign to undermine me as a politician,” she commented. Despite this, Hunter managed to win a seat in the Northern Ireland Assembly, but the case “left a bad impression on me that I can’t control. I will have to pay for the consequences for the rest of my life,” she added. The president of Moldova, Maia Sandu, has also been repeatedly targeted by deepfake attacks. In addition to AI-generated content aimed at ridiculing her personally, disinformation campaigns have also been directed at her pro-Western government. According to the Moldovan authorities, the Kremlin, which has repeatedly tried to interfere in the country’s internal affairs, is behind these actions.

At the same time, more than 20 of the leading tech companies – including TikTok, Meta and OpenAI – recently pledged during the Munich Security Conference to fight the malicious use of AI during elections. The European Commission has also launched a study of cutting-edge AI tools as part of the bloc’s new social media rules, and the US Congress has held several hearings on the potential harm of AI, including one related to elections.

“This type of disinformation is not successful,” said Felix Simon, a researcher at the University of Oxford who tracks how harmful AI content spreads to the public. People’s wariness of what they see online, combined with the unpopularity of much AI-generated content, limits the impact, Simon said.

According to Amber Sinha, an AI expert at the Mozilla Foundation, the nonprofit behind the Firefox browser, there is another way AI can manipulate the vote, but few people pay attention to it because they are fixated on complex generative models. Sinha, who lives in India, points out that the country uses more mundane machine-learning tools that mine voters’ personal data in order to bombard them with targeted political ads. According to her, this way of influencing the vote is much more widespread and more effective.
