Machines lie too, and they do it better than humans

by time news

2023-07-27 20:00:00


From analyzing vast amounts of data, recognizing patterns, and making predictions, through dozens of applications in health and engineering, to the development of generative language and image models, artificial intelligence is transforming our world, and it is doing so at a speed never seen with any other kind of technology.

This rapid spread of AI into so many facets of life, however, has led many to wonder about its drawbacks. So much so that, in some quarters, the possibility that machines could eventually end human civilization itself has even been raised.

Can artificial intelligence end human civilization?

The approach taken by the team of Giovanni Spitale, a researcher at the Institute of Biomedical Ethics and History of Medicine at the University of Zurich, does not pose as radical a threat as the one raised by tech gurus such as Elon Musk. It does, however, expose an issue that deserves special attention: the ability of machines to lie and generate misinformation.

Since its launch in November 2022, the widespread use of the ChatGPT chatbot has generated public concern about the spread of misinformation and disinformation online, particularly on social media platforms. Critics of the technology have warned on several occasions that generative language models such as GPT-3 and its successor, GPT-4, could be used to produce convincing disinformation; yet because of their novelty in the public sphere, very few studies have examined how effective they might actually be for this purpose.

To shed some light on the issue, Spitale's team carried out a study with 700 participants, analyzing their ability to judge the veracity of information generated by artificial intelligence models on social networks, as well as to tell whether a given piece of information had been produced by a human being or by a machine.

AI and the war for Internet dominance

Among their conclusions, the researchers found that most of the subjects had trouble distinguishing tweets written by humans from those generated by an AI. They also found notable difficulty in discriminating between true and false information on a variety of topics, from vaccines and autism, to 5G technology and COVID-19, to climate change and evolution, all of them frequently subject to public misconceptions.

False information generated by AI models is more likely to be assumed to be true.

In fact, according to Spitale, participants were more likely to identify disinformation generated by other humans than disinformation generated by GPT-3. Even more disturbing: false information generated by AI models is more likely to be assumed to be true.

These findings imply that AI models such as GPT-3 and other generative language models can inform social media users effectively, but also that they can misinform even more effectively than humans can. As Spitale states: “Our findings raise important questions about the potential uses and abuses of GPT-3 and other advanced AI text generators, and about the implications for the dissemination of information in the digital age.”

Until very recently, machines had always been just that: machines. Now, however, artificial intelligence is endowing them with very human qualities, such as lying, to the point that dealing with a machine may, in the future, become a matter of trust: the same trust your neighbor or an opinion columnist inspires in you.
