These are the robots that can speak for us after our death


Sara Suarez-Gonzalo

Updated: 05/24/2022 03:26h


Machine learning systems are creeping further and further into our daily lives, challenging our moral and social values and the norms that govern them.

Today, virtual assistants threaten the privacy of the home; news recommendation systems shape the way we understand the world; risk prediction algorithms advise social workers which children to protect from abuse; and data-driven recruitment tools rank our chances of getting a job. Yet the ethics of machine learning remains a fuzzy field.

While looking for articles on the subject for young engineers studying Ethics and Information and Communication Technologies at UCLouvain (Belgium), I was particularly struck by the case of Joshua Barbeau, a 33-year-old man who used a website called Project December to create a conversational robot – a chatbot – that would simulate a conversation with his fiancée, Jessica, who had died of a rare disease.

Robots that imitate dead people

Known as a deadbot, this type of chatbot allowed Barbeau to exchange text messages with an artificial “Jessica.” Despite the ethically controversial nature of the case, I rarely found material that went beyond the merely factual and analyzed it from an explicitly normative perspective: why would it be right or wrong, ethically desirable or reprehensible, to develop a deadbot?

Before we tackle these questions, let’s put things in context: Project December was created by video game developer Jason Rohrer with the goal of allowing people to design chatbots with whatever personality they wanted to interact with, as long as they paid for it. The project was built on an API for GPT-3, a text-generating language model from the artificial intelligence research company OpenAI.

Barbeau’s case sparked a dispute between Rohrer and OpenAI, since the company’s usage guidelines explicitly prohibit GPT-3 from being used for sexual, romantic, self-harm or harassment purposes.

Calling OpenAI’s position hypermoralistic and arguing that people like Barbeau are “consenting adults,” Rohrer pulled the plug on the GPT-3-based version of Project December.

Although we all have certain intuitions about whether it is right or wrong to develop a machine learning deadbot, spelling out their implications is not an easy task. That is why it is important to address the ethical issues the case raises, step by step.

Is Barbeau’s consent enough?

Since Jessica was a real (albeit dead) person, Barbeau’s consent to the creation of a deadbot to mimic her seems insufficient. Even when they die, people are not mere things with which others can do as they please. That is why our societies consider it wrong to desecrate or disrespect the memory of the dead. In other words, we have certain moral obligations towards the dead, insofar as death does not necessarily imply that people cease to exist in a morally relevant way.

Likewise, there is an open debate about whether we should protect the fundamental rights of the dead (for example, to privacy and personal data). Developing a deadbot that replicates someone’s personality requires large amounts of personal information, such as data from their social networks (see what Microsoft or Eternime propose), which can reveal very sensitive traits.

If we agree that it is unethical to use people’s data without their consent while they are alive, why should it become ethical after their death? In that sense, when developing a deadbot it seems reasonable to request the consent of the person whose personality is imitated – in this case, Jessica.

When the imitated person gives the green light

Thus, the second question is: would Jessica’s consent be enough to consider the creation of her deadbot ethical? What if it were degrading to her memory?

The limits of consent are indeed a controversial issue. Take as a paradigmatic example the Rotenburg cannibal, who was sentenced to life imprisonment despite the fact that his victim had agreed to be eaten. In this sense, it has been argued that it is unethical to consent to things that may be harmful to us, whether physically (selling one’s vital organs) or in a more abstract way (alienating one’s own rights).

In what sense something can be harmful to the dead is a particularly complex question that I will not analyze in detail. However, it should be noted that while it is not possible to harm or offend the dead in the same way as the living, this does not mean that they are invulnerable to bad actions, nor that such actions are ethical. The dead may suffer damage to their honor, reputation or dignity (for example, if posthumous smear campaigns are launched against them), and disrespect for the dead also harms those close to them. Moreover, behaving badly towards the dead leads us towards a society that is more unjust and less respectful of people’s dignity in general.

Finally, given the malleability and unpredictability of machine learning systems, there is a risk that the consent provided by the imitated person (while alive) amounts to little more than a blank check on the system’s possible evolution.

Taking all this into account, it seems reasonable to conclude that if the development or use of the deadbot does not correspond to what the imitated person agreed to, their consent should be considered invalid. Furthermore, if it clearly and intentionally violates their dignity, even their consent should not be enough to consider it ethical.

Who bears the responsibility?

A third question is whether artificial intelligence systems should aspire to mimic any kind of human behavior (leaving aside here the question of whether this is even possible).

This is a longstanding concern in the AI field and is closely related to the dispute between Rohrer and OpenAI. Should we develop artificial systems capable, for example, of exercising care or making political decisions? There seems to be something about these abilities that differentiates humans from other animals and from machines. For this reason, it is important to bear in mind that instrumentalizing AI towards excessively techno-solutionist ends, such as replacing loved ones, can lead to a devaluation of what characterizes us as human beings.

The fourth ethical question is who is responsible for the results of a deadbot, especially in the event that it has undesirable effects.

Imagine Jessica’s deadbot autonomously learning to act in a way that degrades her memory or irreversibly damages Barbeau’s mental health. Who would take responsibility?

AI experts answer this slippery question from two main approaches: some consider that the responsibility falls on those who participate in the design and development of the system, insofar as they do so in accordance with their particular interests and worldviews; others understand that, since machine learning systems are necessarily context-dependent, the moral responsibility for their results must be distributed among all the agents that interact with them.

The first position is closer to my own view. In this case, since the deadbot was explicitly co-created by OpenAI, Jason Rohrer and Joshua Barbeau, it seems logical to analyze the level of responsibility of each party.

First of all, it would be difficult to hold OpenAI accountable after it explicitly prohibited the use of its system for sexual, romantic, self-harm or harassment purposes.

However, it seems reasonable to attribute a significant level of moral responsibility to Rohrer because: (a) he explicitly designed the system that allowed the deadbot to be created; (b) he did so without providing for measures to avoid possible harmful results; (c) he was aware that he was in breach of OpenAI’s guidelines; and (d) he benefited financially from it.

And third, since Barbeau customized the deadbot based on Jessica’s particular traits, it seems legitimate to hold him co-responsible in the event that her memory was degraded.

Ethical, under certain conditions

So, going back to our first question about whether it is ethical to build a machine learning deadbot, we could give an affirmative answer on the condition that:

1.- Both the imitated person and the person who personalizes the deadbot and interacts with it have given their free consent, based on as detailed a description as possible of the design, development and uses of the system;

2.- Developments and uses that do not adhere to what the imitated person consented to or that go against their dignity are prohibited;

3.- The people involved in its development and those who benefit from it assume responsibility for its possible results (especially if they are negative), both retroactively, to account for events that have occurred, and prospectively, to actively prevent them in the future.

This case exemplifies why the ethics of machine learning matter. It also illustrates why it is essential to open a public debate capable of better informing citizens and helping us develop policies that make AI systems more open, socially fair and respectful of fundamental rights.
