Artificial intelligence reads minds

Artificial intelligence (AI) is used not only for data analysis, face recognition and many other routine tasks, but also for more subtle matters: recognizing emotions, for example, or decoding brain signals to help paralyzed people write text. However, many scientists question the effectiveness of such systems, as well as the ethics of collecting and using data on people's emotions.

AI for mind reading

Scientists and developers are creating increasingly sophisticated brain-computer interfaces – systems designed to exchange information between the human brain and a computer. One of the latest developments of this kind is a project by scientists at Stanford University in California: they created an AI that interprets the brain signals produced when a person imagines writing words with a pen, and translates them into text. According to the developers, such a system could be used by paralyzed people to type texts.

The system is built around an artificial neural network: scientists implanted two sensor arrays in the brain of a paralyzed 65-year-old American, each of which picks up signals from about 100 neurons (out of the roughly 100 billion neurons in the human brain). When the person imagines writing text with a pen on paper, the brain signals are fed into the neural network, whose algorithms analyze them and output the corresponding letters and words.

As the authors of the study note, although most human movements involve thousands or even millions of neurons, signals from just 200 neurons are enough to determine exactly what a person wants to write. The difficulty in creating such a system is that machine learning cannot be applied to huge data sets, as is done in other cases, because such data sets simply do not exist – otherwise the participants would have had to mentally write tens of thousands of texts. Instead, the neural network is trained on examples of the brain signals recorded while a specific person imagined writing individual letters.
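The general shape of such a decoder can be sketched as follows – a deliberately simplified illustration on synthetic data, with an off-the-shelf scikit-learn classifier standing in for the recurrent network the Stanford team actually used. Windows of multichannel neural activity are flattened into feature vectors, and a classifier trained on repetitions of individual letters predicts which character was imagined:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    N_CHANNELS = 200   # electrodes yielding usable signal, as in the study
    WINDOW = 50        # time steps recorded per attempted character
    LETTERS = list("abcdefghijklmnopqrstuvwxyz")

    rng = np.random.default_rng(0)

    # Stand-in "neural activity": a fixed firing pattern per letter plus noise.
    templates = rng.normal(0.0, 1.0, (len(LETTERS), WINDOW * N_CHANNELS))

    def synthetic_trial(letter_idx):
        """One imagined-handwriting attempt, flattened to a feature vector."""
        return templates[letter_idx] + rng.normal(0.0, 0.5, templates.shape[1])

    # Calibration data: a few dozen imagined repetitions of each letter,
    # mirroring the study's per-letter training rather than a huge corpus.
    X = np.array([synthetic_trial(i) for i in range(len(LETTERS)) for _ in range(30)])
    y = np.array([LETTERS[i] for i in range(len(LETTERS)) for _ in range(30)])

    clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
    clf.fit(X, y)

    # Decoding a new "imagined" letter:
    print(clf.predict([synthetic_trial(LETTERS.index("h"))]))  # typically ['h']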

A paralyzed patient imagines writing letters of the alphabet; sensors implanted in his brain pick up the signals, and AI algorithms display the corresponding letters on the screen

Photo: Frank Willett / Stanford Medicine

This method allows a person to write 90 characters per minute – almost as fast as typing on a smartphone, where people type 115 characters per minute on average, or writing by hand (about 120 characters per minute). The raw accuracy of this input method is 94.1%, and with an autocorrector enabled it exceeds 99%. Scientists have not yet created a universal system that could help different paralyzed people write this way, but they continue to work in that direction.

Moreover, this method is faster and simpler than similar systems based on tracking eye or head movements. “If you use eye tracking to work with a computer, your eyes are tied to whatever you are doing. You can't look up, look away or do anything else. Having this additional input channel could be really important,” said one of the study's authors, Jamie Henderson, professor of neurosurgery and neurology at Stanford University.

Another similar, albeit smaller-scale development is a study by scientists at Finland's Aalto University, who created an AI system that learned to reproduce how a person types on a keyboard, including errors and individual typing habits. “There are certain choices we make, so it seems the human brain optimizes the process when we type. I wanted to do the same with software: optimize it and see whether it behaves like a human,” says researcher Jussi Jokinen.

The scientists built the system by drawing on knowledge of how people behave when typing on a smartphone. The resulting system typed at a speed close to a human's, and the number of errors and corrections was similar as well. The purpose of this AI is to make it possible to quickly test new interfaces, keyboards and so on. “I hope designers can use this tool to quickly evaluate their ideas and test on models how users would type if they were given a particular keyboard,” says Mr. Jokinen.
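The underlying idea can be illustrated with a toy model – the parameters and structure below are invented for this example and are not taken from the Aalto study. A simulated typist has a characteristic speed, a probability of making a typo on each keystroke, and a probability of noticing and correcting it; realistic speed and error statistics emerge from those three numbers:

    import random

    def simulate_typing(text, cpm=115, error_rate=0.03, notice_rate=0.9, seed=0):
        """Toy typist: nominal speed in characters per minute, a typo
        probability per keystroke, and a chance of noticing each typo."""
        rng = random.Random(seed)
        seconds = 0.0
        typos = corrections = 0
        for _ in text:
            seconds += 60.0 / cpm                  # time to press one key
            if rng.random() < error_rate:          # typo on this keystroke
                typos += 1
                if rng.random() < notice_rate:     # typist notices it
                    corrections += 1
                    seconds += 2 * 60.0 / cpm      # backspace + retype
        achieved_cpm = len(text) / (seconds / 60.0)
        return achieved_cpm, typos, corrections

    cpm, typos, corrections = simulate_typing("the quick brown fox " * 20)
    print(f"{cpm:.0f} cpm, {typos} typos, {corrections} corrected")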

AI that recognizes emotions

Another promising area of application for AI is recognizing people's emotions. One such system is 4 Little Trees, software developed by the Hong Kong company Find Solution AI. Based on facial expressions, micro-movements, voice, eye movements and other parameters, it can detect emotions such as joy, sadness, anger, fear, surprise, fatigue and stress, and can also notice lapses in concentration and attention. In essence, such a system is an extended version of face recognition: it does not just identify facial features, but picks out the various signs of different emotions.
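Commercial systems like 4 Little Trees are proprietary, but the general shape of such a pipeline can be sketched on synthetic stand-in data: a face (or voice) is reduced to a numeric feature vector – landmark distances, gaze angles, pitch statistics and so on – and a classifier maps that vector to an emotion label:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise", "fatigue", "stress"]
    N_FEATURES = 64   # e.g., landmark distances, gaze angles, voice pitch stats

    rng = np.random.default_rng(1)
    # Synthetic training data: one cluster of feature vectors per emotion.
    X = np.vstack([rng.normal(i, 1.0, (100, N_FEATURES)) for i in range(len(EMOTIONS))])
    y = np.repeat(EMOTIONS, 100)

    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

    # Classifying a new feature vector extracted from a "face":
    new_face = rng.normal(EMOTIONS.index("surprise"), 1.0, (1, N_FEATURES))
    print(clf.predict(new_face))   # typically ['surprise']

In real systems, of course, the hard part is exactly what this sketch fakes: extracting features that actually carry reliable information about emotion – which, as discussed below, many scientists doubt is possible.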

4 Little Trees software developed by Hong Kong-based company Find Solution AI to recognize human emotions

Photo: Find Solution AI / youtube.com

Last year, when schools switched to distance learning during the pandemic, 4 Little Trees gained popularity in Hong Kong, where 84 local schools now use the software. The app regularly sends teachers reports on students' emotional state and warns the students themselves if it notices their attention flagging.

“During the pandemic, tech companies have been promoting their emotion recognition software as a way to remotely monitor workers and students. Such tools are advertised as being able to supervise employees working remotely and are already being used in remote job interviews. Emotion recognition will have an impact across the world, from workplaces and schools to public spaces,” said Kate Crawford, co-founder of the AI Now Institute at New York University, which studies the social consequences of AI adoption.

Many companies around the world are working on emotion recognition, including Amazon, Microsoft and Google. Sometimes such systems are used in rather unexpected ways: Disney, for example, has used this kind of software to analyze viewers' reactions to its films, from Star Wars to Zootopia. The American marketing company Kantar Millward Brown has similarly used AI to determine how people feel when watching ads for Coca-Cola, Intel and other brands. Automakers, including Ford, BMW and Kia, are in turn working on systems that could assess whether a driver's alertness has dropped. Such systems are especially popular with recruiting companies: the American firm HireVue and the British firm Human, among others, use this kind of software. Governments are also interested in such developments, hoping to identify suspicious persons more effectively. Overall, the research company Markets and Markets valued the market for emotion recognition systems at $19.5 billion as of 2020 and expects it to nearly double to $37.1 billion by 2026.

Factorized variational autoencoders for modeling audience response to films

Photo: Disney Research

As with face recognition software, there are many questions about emotion recognition systems – perhaps even more. Many scientists doubt that the software existing today can correctly recognize emotions. In 2019, a group of scientists working with the American Psychological Association reviewed more than a thousand studies on the topic and concluded that people express emotions in a great many different ways, which makes it difficult to build a system that could reliably identify particular emotions. “It is impossible to confidently infer happiness from a smile, anger from a scowl or sadness from a frown, as much of today's technology tries to do while applying what are mistakenly believed to be scientific facts,” they write.

The situation is further complicated by the fact that the same emotions can manifest themselves differently – more or less pronounced – in different regions and cultures.

There are also doubts about the technology in terms of data privacy, as well as the broader ethics of collecting and using such data about other people's emotions. “Having studied the history and the shaky scientific foundations of such tools, I am convinced they need to be tightly regulated. In many cases we cannot even know how many companies use these tools, since they are often deployed without due transparency and without informed consent,” said Ms. Crawford.

Yana Rozhdestvenskaya
