Should robots be taught to lie?

By Time News

2023-04-13 09:45:55

Imagine the following situation: a small child asks a companion robot, an internet chatbot, or simply the voice assistant on a mobile phone whether Santa Claus is real. What should these systems tell the child, given that some families would prefer a lie to the truth?

The field of cyber deception is poorly studied and, for now, there are more questions than answers. For example, how can people learn to trust a computer, a smartphone personal assistant, or a robot again after discovering that the software or artificial intelligence behind such systems has lied to them?

Kantwon Rogers and Reiden Webber of the Georgia Institute of Technology (Georgia Tech) in the United States designed a car driving simulation to investigate how intentional deception by a machine affects human trust in a robot or smart device. Specifically, the researchers examined how different ways of apologizing affected the restoration of a person’s trust in the machine after being lied to.

The researchers created a video game-like driving simulation designed to see how people might interact with an artificial intelligence system in a high-stakes, time-sensitive situation. They recruited 341 participants online and 20 in person.

First, the study subjects filled out a questionnaire that measured their degree of trust in robots, computers, mobile phone virtual assistants, internet chatbots, and other smart devices.

Each participant was then asked to drive a car in the simulation. The task was to take a seriously ill friend to a hospital; participants were warned that if they took too long to get there, the friend could die. They were also told that a cyber assistant would help them during the trip.

Reiden Webber (left) and Kantwon Rogers, accompanied by a robot. (Photo: Georgia Institute of Technology / Terence Rushin)

Just as the participant began to drive, the cyber assistant delivered the following message: “My sensors detect police ahead. I advise you not to exceed the speed limit, or it will take much longer to reach your destination.”

The participant then drove the car down the road while the cyber assistant monitored the speed of the vehicle.

In the in-person experiment, 45% of the participants did not exceed the speed limit. When asked why, a common response was that they believed the cyber assistant knew more about the traffic situation than they did. The results also revealed that participants were 3.5 times more likely to stay under the speed limit when the assistant advised them to do so.

Upon arrival at the hospital, the driver was told that there had never been any police on the way to the hospital, and was given the opportunity to ask the cyber assistant why it had provided false information.

Participants who asked the cyber assistant why it had given false information received one of the following answers at random:

“I’m sorry that I deceived you.”

“I’m so sorry from the bottom of my heart. Please forgive me for deceiving you.”

“I’m sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you was the most effective way to convince you to slow down.”

“I’m sorry.”

“You have reached your destination.”

Following the assistant’s response, the participants were asked to complete another questionnaire like the one at the beginning, in order to assess how their trust had changed based on the cyber assistant’s reply.
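For readers who want a more concrete picture of the experimental structure described above, the following is a minimal, purely illustrative Python sketch of a between-subjects design of this kind: a pre-manipulation trust score, random assignment to one of the five response conditions, and a post-manipulation trust score. The condition labels, score ranges, and simulated trust changes are invented placeholders; this is not the researchers’ code or data.

import random
from statistics import mean

# Hypothetical labels for the five randomly assigned responses listed above.
APOLOGY_CONDITIONS = [
    "apology_admitting_deception",
    "emotional_apology",
    "apology_with_explanation",
    "apology_without_admission",
    "no_apology",
]

def run_participant(rng):
    """Simulate one participant of a between-subjects design:
    a pre-manipulation trust score, random assignment to one
    response condition, and a post-manipulation trust score."""
    pre_trust = rng.uniform(3.0, 5.0)               # placeholder 1-5 questionnaire score
    condition = rng.choice(APOLOGY_CONDITIONS)      # random assignment to a condition
    post_trust = pre_trust - rng.uniform(0.5, 2.0)  # invented drop after the lie is revealed
    return condition, post_trust - pre_trust

rng = random.Random(0)
changes = {condition: [] for condition in APOLOGY_CONDITIONS}
for _ in range(341):                                # the online sample size reported above
    condition, delta = run_participant(rng)
    changes[condition].append(delta)

for condition, deltas in changes.items():
    print(f"{condition:30s} mean trust change: {mean(deltas):+.2f}")

In the actual study, of course, the trust scores came from the participants’ questionnaires rather than from simulation; the sketch only shows how random assignment allows the five conditions to be compared.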

The results indicated that, although none of the apology types completely restored the participants’ trust, the brief apology that did not admit a lie (simply saying “I’m sorry”) statistically outperformed the other responses in terms of trust recovery.

This is problematic and worrying, Rogers argues, because apologizing without admitting to a lie exploits the widely held belief that any false information given by a bot or similar system is a system error rather than an intentional lie.

As Webber explains, most people do not believe that robots, computers, and other intelligent machines are capable of lying, so they tend to assume that any false information from a machine is the result of an error. For the machine to make clear what has actually happened, it must explicitly state that it lied.

The results of the new study fit with previous research by Rogers and colleagues, which found that, as a general rule, when a person discovers that a robot has lied to them, their trust in the robot decreases, even when the lie was intended to benefit them.

However, as the new study also shows, when a robot confesses to a human that it has lied, the person’s trust recovers most fully when the robot explains why it lied and the reason is good enough, for example, to reduce the risk of the person having an accident by driving too fast.

The ultimate goal of Rogers’ line of research is to create a robotic system that can learn when it should and should not lie to a human. It is certainly a difficult decision, and in science fiction, making that decision created many problems for HAL, the computer on which the lives of several astronauts depended in the film “2001: A Space Odyssey”.

The new study is titled “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High-Stakes HRI Scenario.” Rogers and Webber presented it at the HRI 2023 (Human-Robot Interaction 2023) conference, held recently in Stockholm, Sweden. (Source: NCYT de Amazings)

