Does artificial intelligence have a “consciousness”?

by time news

“I can’t believe we still have to fight against science-fiction ideas like this,” thunders Sébastien Konieczny in the corridors of the Computer Science Research Center of Lens. The cause of this machine-reasoning expert’s annoyance? A message posted on Twitter by Ilya Sutskever, co-founder of OpenAI alongside Elon Musk: today’s largest neural networks may be “slightly conscious”.

Dropped without explanation, the short phrase quickly caused a stir. Machine-learning specialists initially dismissed outright the hypothesis that such advanced artificial intelligence (AI) could exist. But a professor at the Massachusetts Institute of Technology (MIT) stirred the pot again, taking offense that such an idea could be so vehemently ruled out: according to him, there is no way to distinguish “non-conscious” models from “slightly conscious” ones.

So whom to believe? Have neural networks, some of which have as many parameters as there are nerve cells in the brains of small animals, really reached a decisive stage, to the point of possessing a fraction of what characterizes us as humans? In appearance only, several French researchers insist. “We must first return to the fuzzy notion of consciousness,” tempers Jean-Gabriel Ganascia, professor at the Faculty of Sciences of Sorbonne University and author of the book Servitudes virtuelles (Seuil), due out on March 4. “We can imagine a machine able to reflect in the literal sense, that is, to look at what it is doing and improve its behavior accordingly. So AI can arguably tick some of the boxes of consciousness.”


However, most of what characterizes us, such as our perception of ourselves or our emotions, is currently out of its reach. “Beyond the fact that the expression ‘slightly conscious’ means nothing, what would a machine’s emotions be?” the expert laughs. “Loving electricity? Hating water?” Indeed, part of our emotions remains intimately linked to our survival. If AI had emotions, they would probably be very different from ours.

Imitate without understanding

“Let’s not forget that the controversial sentence comes from someone whose company sells neural networks,” warns Sébastien Konieczny. GPT-3, its flagship product, needs publicity. Admittedly, its results are impressive: the system invents remarkably elaborate stories from a few simple keywords. “Machines like GPT-3, or its counterpart Bert, are now able to pass the Turing test, that is, to fool a human being for a few minutes. Yet this test was originally devised to attest to a machine’s intelligence,” Jean-Gabriel Ganascia concedes.

Nevertheless, neural networks remain mere imitators. “If you ask them to produce a text, they do it, with no more will or consciousness of their own than a calculator,” asserts Sébastien Konieczny. Would a general AI capable of reasoning or empathy therefore be nothing but a chimera?

“Only a minority of researchers work on this,” notes Konieczny. “The idea of the machine freeing itself from its creator is not new. Transhumanists promise us a future in which it replaces humans, or in which our personalities are uploaded onto chips. But nothing suggests we are heading that way,” adds Jean-Gabriel Ganascia. Moreover, the Human Brain Project, which aims to simulate the entire functioning of the brain, potentially allowing an artificial consciousness to emerge, has so far produced hardly any convincing results.

A decisive ability to communicate

The researchers nevertheless put forward a few ideas for making AI more efficient and useful. “The big question is whether we persist with deep learning alone, which is only one way of doing artificial intelligence, or whether we integrate other methods,” summarizes Sébastien Konieczny. In the future, several neural networks may work together to mimic different specialized areas of the brain. But such a system will have to be taught to explain its decisions.

“Currently, neural networks are black boxes. They cannot argue their case, so we are forced to adopt or reject their suggestions as if we were consulting an oracle,” explains Jean-Gabriel Ganascia. “In the future, the networks may tell us: ‘The reason I came to this conclusion is that there is this or that in the data.’ But the result could vary a great deal.”

This ability to communicate will be decisive, the researcher stresses, because algorithms already occupy an ever larger place in our daily lives. Some grant loans. Others play a role in court decisions. Integrating knowledge into AI models also seems essential so that they respect the rules and make the right ethical choices.


“We can imagine a combination of two systems. The first, faster and more intuitive, would be based on neural networks, while the second, more analytical, would rely on so-called symbolic AI, that is, a set of rules and knowledge. This would bring us closer to human decision-making, which alternately gives priority to instinct or to reflection,” imagines Sébastien Konieczny. Nothing magic, then. Consciousness, for now, keeps its mysteries.

