On the request for a moratorium on advanced AI research

by Time News

At the end of last March, an open letter appeared online, promoted by the Future of Life Institute, calling for a six-month moratorium on "giant AI experiments," especially on "the training of systems more powerful than GPT-4." The institute behind this initiative defends ideas close to transhumanism and to the philosophy known as "longtermism," which explains why the letter contains some debatable claims that not all signatories would accept, as some of them have already explained. For example, it asks: "Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?" This possibility is what transhumanists call "the Singularity," and they are convinced it will happen sooner rather than later, but it is a possibility greeted with skepticism by many AI specialists, including signatories of the letter such as Gary Marcus or Ramón López de Mántaras.

The letter received harsh criticism from the outset. Some think it is pure propaganda; others, on the contrary, that it tries to instill fear of a technology that, like all technologies, however much suspicion it arouses at first, will end up bringing profound benefits to humanity. Thus, a few days after its publication, on April 2, Yann LeCun, one of the most lucid and prudent AI specialists, chief scientist at Meta and a recent Princess of Asturias Award laureate, wrote on his Facebook wall: "By amplifying human intelligence, AI may cause a new Renaissance, perhaps a new phase of the Enlightenment. But prophecies of AI destruction are also spawning a new form of medieval obscurantism."

One need not agree with the content of the letter one hundred percent, much less sympathize with the objectives of the Future of Life Institute, to recognize that it has given voice to a widespread concern generated in large part by the extraordinarily rapid and largely uncontrolled advances in AI in recent years. The real dangers are not those of dystopian scenarios à la Terminator. There is no reason to accept that we will soon have superintelligent machines that take control of everything and, through malice or negligence, end our species. It is not even necessary to believe that the AI systems we have possess real intelligence. Whether they are intelligent or not, they are already having worrying consequences that deserve serious thought.

Very disturbing examples are already showing how, with the help of these systems, fake news can be created easily and realistically, complete with images and videos, capable of deceiving even the most seasoned observer. The proliferation of this type of content could trigger a credibility crisis in the news media and contribute significantly to polarization and political instability, thereby weakening the foundations of democratic coexistence.

A particularly significant example is that of facial recognition systems. In China and Russia they are used to monitor political dissidents. The Chinese company Huawei provides smart surveillance technology to more than fifty countries. As Stephanie Hare points out in her book Technology Is Not Neutral: "in Russia authorities have been using Moscow's network of facial recognition cameras to identify and detain people attending protests in support of Alexey Navalny. […] Some of the people who have been detained are journalists who attended the protests in a professional capacity, and the authorities are also investigating lawyers and doctors who provided professional assistance to opposition activists."

Although the danger to citizens is obviously much greater in societies ruled by authoritarian regimes, that does not mean advanced democracies are free of it. The same book describes, for example, the situation in Great Britain, where shopping malls, town halls and even the Metropolitan Police make use of these systems.

Within the governing bodies of the European Union there is a growing conviction that the indiscriminate use of facial recognition systems in public places, even for police purposes, must be prohibited. The proposal for a regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence, published in 2021, already established that biometric data cannot be collected without people's consent, and that they must always be informed about it. More recently, the document entitled "Guidelines on the use of facial recognition technology in the field of law enforcement," published in 2022, states in its conclusions that "remote biometric identification of individuals in publicly accessible spaces poses a high risk of intrusion into individuals' private lives and does not have a place in a democratic society, as by its nature it entails mass surveillance." The document recommends a total ban.

To all this, add the loss of privacy caused by the commercial and even political use that technology companies make of our data (remember the Facebook and Cambridge Analytica scandal). These companies are accumulating so much power that they strongly influence the decisions of many countries and manage to avoid paying taxes, thereby increasing their profits. Add also the military use of AI in autonomous weapons. Or the growing weight that decisions made by AI systems carry in people's lives, for example, in selecting candidates for a job or granting a mortgage. Serious gender and racial biases have been found in these systems, leading to situations of injustice that have even destroyed some lives, as Cathy O'Neil explained very well in her book Weapons of Math Destruction. Add, finally, the fraudulent uses of AI in finance or in the commission of ordinary crimes, and so on.

It is extremely unlikely that the moratorium called for in the open letter will be honored. The opposing interests are very powerful. It is hard for American companies to stop researching AI when their main competitor, China, will not. I suppose most of the letter's signatories know this well. What matters is that they have drawn the attention of the general public and of (usually very poorly informed) politicians to the significant risks we face, which call for greater attention to AI governance and regulation. Contrary to the Spanish saying, you can put gates on an open field: regulation is possible. And this in no way implies giving up the beneficial effects of AI.
