The ‘Joi effect’ or the emotional risk of humanized artificial intelligence

by time news

2024-08-19 05:56:20


Joi is a flash of warmth amid the loneliness of the machines. Projected as an ethereal figure, she is the perfect companion for any lonely man: sweet, attentive, and always present.

She provides comfort, companionship, and even the illusion of true love; Joi does the job she was created for well. This artificial intelligence is one of the stars of Blade Runner 2049, a 2017 film directed by Denis Villeneuve that explores how easily the human brain can blend reality with fiction to fill our emotional gaps.

In the story, Agent K forms a deep bond with Joi, his assistant and closest companion. At the film's climax, his life falls apart in seconds when the small transmitter that projects her is destroyed, a heartbreaking reminder of how fragile an emotional connection to what is only a digital illusion can be in a cold, mechanized world.

Today, that cinematic fantasy (or nightmare) seems to be slowly becoming a tangible reality. This week OpenAI, the creators of ChatGPT, published the results of a study on the performance of its GPT-4o model, launched last May, warning of a "medium risk" from its persuasiveness, which could generate emotional dependence in users, triggered by responses that mimic human spontaneity and warmth in a matter of milliseconds.

With its ability to interpret and respond to voice and text stimuli almost instantly, GPT-4o seems to be getting closer to emulating the unconditional companionship that characters like Joi exhibit.

Another recent study reinforces this concern. In March of this year, a group of scientists showed that artificial intelligence is already beginning to occupy a place in our lives that previously belonged exclusively to human relationships.

During the International Conference on Intelligent User Interfaces in South Carolina, Mateusz Dubiel, Anastasia Sergeeva and Luis A. Leiva from the University of Luxembourg explained how the qualities of synthetic voices in user interfaces can influence people’s decisions.

Traditionally, studies have focused on the "dark patterns" of graphical interfaces, but their research explored how speech can also be manipulated, finding that the characteristics of synthetic voices can significantly influence human decisions.

Anthropomorphization and emotional dependence

OpenAI took note and included the conclusions of that research on its official blog as part of a serious warning about the danger of the anthropomorphization of artificial intelligence: the act of attributing typical human behaviors and characteristics to non-human entities, such as AI models. In the case of GPT-4o, this risk is intensified by its audio capabilities, which allow for more human-like interactions with the model.

According to the company, during the initial tests of GPT-4o and its voice interface, it noticed that some users began forming emotional connections with the platform, using expressions that imply a personal bond. "This is our last day together," said one of the participants.

The company led by Sam Altman emphasizes "the need" to study in depth how these effects may intensify over time and affect mental health. According to OpenAI, socializing with an AI model can "produce externalities that interfere with interactions between people," causing, for example, users to reduce their need for human contact, which affects healthy relationships.

How to protect our feelings?

Various institutions are already studying the consequences of human interaction with algorithms. In Medellín, Mariana Gómez directs the Laboratory of Neuroscience and Human Behavior at the Universidad Pontificia Bolivariana, where researchers found that "the neural circuits that are activated when we interact with or fall in love with another person are the same ones activated when we interact with a chatbot."

As she explained to EL COLOMBIANO, the use of digital technologies can influence social development, especially in teenagers, since "the use of these devices can have a great impact on young people who have not yet developed the strategies necessary to face social life."

This would lead to the anthropomorphization of parallel realities that, when faced with the real one, leave humans without the tools necessary to manage real human relationships, as OpenAI suggests.

The UPB professor also addressed the possible consequences of this trend for our relationships and our feelings. "Today we are beginning to see how solid social structures are becoming liquid," she explained.

Citing the work of philosophers such as Zygmunt Bauman and Byung-Chul Han, she notes that "the uncertainty created by technology creates anxiety and questioning about the future, which affects our relationships and our perception of the world."

She also offered a key recommendation for protecting mental health when using technologies like ChatGPT: "Technology should be a cognitive or emotional bridge that facilitates human interaction, not replace it. AI can help improve texts and give advice, but we must always start from our own base and not let it become the executor of ideas."

Gómez also addressed the responsibility of technology companies, since "they receive advice from experts who know perfectly how the brain works and the scope of their products," which is why she considers it necessary to draw boundaries "to avoid possible manipulation or the application of ideas."

The Eliza effect

In the 1960s, MIT professor Joseph Weizenbaum created Eliza, a program that imitated a therapist using predefined responses. Although its means were limited, Eliza showed how people can project emotion and personality onto a machine.

This phenomenon, baptized the 'Eliza effect', reflects our tendency to attribute human characteristics to systems that merely replicate language patterns.
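To appreciate how little machinery was behind that projection, here is a minimal sketch of an Eliza-style exchange in Python; the rules and phrasings are illustrative inventions, not Weizenbaum's original script:

```python
import re

# A handful of illustrative Eliza-style rules: a regex pattern plus a
# response template that echoes part of the user's own words back.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

# Canned fallbacks used when no pattern matches.
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def eliza_reply(text: str, turn: int = 0) -> str:
    """Return a therapist-style reply built purely from pattern matching."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return FALLBACKS[turn % len(FALLBACKS)]

print(eliza_reply("I feel lonely"))       # -> Why do you feel lonely?
print(eliza_reply("The weather is bad"))  # -> Please go on.
```

Nothing in this loop understands anything; the feeling of being heard is supplied entirely by the user, which is precisely what the Eliza effect describes.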

Today, with the sophistication of models like GPT-4o, the Eliza effect is becoming relevant again, and chatbots seem to bring us ever closer to a possible 'Joi effect': just as in Blade Runner 2049, AI reinforces the illusion of a real emotional connection, raising new questions for science and new challenges for how we manage our interactions with technology.
