How AI tricks you into believing everything it tells you

by time news

2023-06-23 06:05:17

The dizzying development of artificial intelligence promises to radically transform human life. Since the successful launch of ChatGPT at the end of last year, companies of every size have been racing ever faster to put tools capable of generating content on demand in users' hands. Today, anyone can use this technology to easily create text or images, without needing the slightest awareness of the dangers these platforms conceal. Chief among them, undoubtedly, is the proliferation of biased and false information.

This greatly worries Celeste Kidd, a professor of psychology at the University of California, Berkeley. Kidd, first author of an article published in Science analyzing the danger that machines could alter users' beliefs, called attention, in conversation with ABC, to our tendency to take at face value any information we find on the internet.

“We tend to use systems that aim to provide simple and succinct answers to sometimes difficult questions. When we receive a clear and confident answer, the uncertainty is resolved and we move on with our lives, regardless of how accurate the information is,” Kidd says. “The problem is that people don’t know what they don’t know. They can’t catch the errors, because they are querying the models for answers they don’t yet know,” she continues.

Kidd explains in her article that the enormous popularity of systems such as ChatGPT, and of machines capable of generating images from words, such as DALL-E or Midjourney, makes users more inclined to accept the results they produce as accurate. Nor does it help that these tools never hesitate to answer the queries they receive, and do so with total confidence. In the end, this can affect both people's beliefs and their biases.

The article stresses the importance of conducting studies now, while the technology is still young, in order to better understand how these systems can deceive users and alter their beliefs. And they can do so, especially in cases where the internet user is not aware that AI is still far, far from surpassing human capabilities.

Far from infallible

We have said it many times: systems based on artificial intelligence do not invent anything. All the content they create is a direct product of the vast amount of information with which their developers have trained them. Some of it comes directly from the users of the tools themselves; the rest, from every corner of the internet.

Obviously, if the data a person shares can contain errors or biases, content created by a machine trained on that same information will have the same problems. These systems are far from infallible. The problem, as Kidd points out, is that users tend to accept the content they generate as accurate, owing to the confidence with which they deliver their responses and the enormous expectations surrounding them. All of this leads the person on the other side of the screen to settle for, and accept as true, whatever the system tells them. And that can be a serious problem. In fact, it already has been.

Just a few weeks ago, a New York lawyer turned to ChatGPT to find precedents to support a case he was working on. He failed to verify the data the machine produced and ended up presenting a batch of false information to the judge. And this is not an isolated case.

Last April, the mayor of an Australian town threatened to sue OpenAI, the developer of ChatGPT, after discovering that the machine was wrongly claiming he had gone to jail for bribing foreign officials. In the United States, a university professor even threatened to fail an entire class after an AI told him that his students had copied their work.

“Money and ego”

“We need to study a great deal, especially how the hype surrounding these models affects the severity and transmission rate of false or biased content,” Kidd concludes. She also stresses the importance of the companies behind these machines being transparent about the data used to train them and where it comes from, something that, so far, has not happened. The author of the article is clear: the technology must be regulated and users educated about the risks it conceals.

“We must prioritize educating society about what the real capabilities of these technologies are and are not. Politicians, the media, and the public deserve better than taking the word of model developers, whose interests are driven by money and ego,” concludes the Berkeley professor.
