The Spaniard in Korea who ‘copies’ the brain to improve artificial intelligence

by time news

Suddenly… snap! One sound and your brain kicks in: it identifies a snap, close behind your head, probably fingers snapping together. It analyzes everything practically instantly, without having to process bits, ones and zeros or anything like that. How? That is what Miguel Sánchez-Valpuesta, a 33-year-old Spaniard who works at the Korea Brain Research Institute (KBRI), is studying, with the aim of applying brain processes to improve the architecture of artificial intelligence (AI) and make it faster, more efficient and more sustainable by 'copying', as far as possible, the ingenuity that the human being hides inside his head.

“The goal is not to copy nature as it is, but to learn from something that is already beautiful and try to develop new things. In general we know little about how our brain works, or even how we think, but it is fascinating to try to discover it. How, with only the vibration of two membranes in our ears, can we locate in three dimensions something we are not seeing or touching? How do we differentiate several voices speaking at the same time? Through calculations and predictions that occur almost instantaneously in our heads, and that are the result of processing infinitely more advanced than that of current computers,” the young man from Barcelona explained to EL PERIÓDICO DE ESPAÑA from the South Korean city of Daegu, where he has been working for two years.

The challenge for those who, like him, want to uncover the secrets of the brain is precisely to understand how procedures refined over millions of years of evolution are articulated, and how they can be applied to improve current technology.

Over the next few years, he explains, various artificial intelligence models will appear, from those that rely on a traditional computing architecture to those that aspire to copy the brain through so-called neuromorphic computing.

Miguel, who is one of the international experts at the Hermes Institute and has a master’s degree in biomedicine from the University of Barcelona, spent six years in Japan doing a doctorate on the mechanisms and neural circuits of language learning. From there he made the leap to Korea, where his study of the way auditory circuits interact with the motor brain could improve an artificial intelligence that emerged in 2022 as one of the technological advances set to mark the next decade.

“Just as we see with the brain, since the image it presents to us is not the one our eyes receive, we also listen with it. The brain ‘invents’ and ‘fills in’, so to speak, much of what we perceive, based on predictions and previous experience, and continually maintains a representation of the external world that is selectively ‘updated’ by sensory input. That is why there are also phantom-sound pathologies: sounds the ear is not perceiving but that exist in our head. This kind of knowledge allows us to understand how sound travels through our neurons and, therefore, a little more about how this processing works,” explains Miguel.

The human brain, however, is not limited to ‘filling in’; it has extraordinary adaptability when it comes to sound. For example, if we receive a sound at 60 decibels instead of 30, it sounds only a little louder in our head, but, as the young man from Barcelona explains, its physical intensity is actually 1,000 times greater. “The same thing happens with our own voice or when we exercise. You don’t hear the sounds you generate as loud, because your brain muffles them; it’s an involuntary survival mechanism,” he points out.
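That factor of 1,000 follows from the decibel scale itself, which is logarithmic: every 10 dB step multiplies sound intensity by ten. A minimal sketch of the arithmetic (the function name is illustrative, not from the article):

```python
# Decibels measure intensity on a logarithmic scale: each +10 dB
# multiplies the sound's power by 10, so +30 dB means 10**3 = 1000x.

def intensity_ratio(db_from: float, db_to: float) -> float:
    """Return how many times more intense db_to is than db_from."""
    return 10 ** ((db_to - db_from) / 10)

ratio = intensity_ratio(30, 60)
print(ratio)  # 1000.0 -- yet the brain makes it sound only "a little louder"
```

This compression is exactly the adaptability described above: the ear receives a thousandfold jump in energy, and the brain presents it as a modest change in loudness.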

If these processes could be synthesized, the ability of autonomous vehicles to adapt to new situations and unexpected stimuli, for example, could be improved. It is just an example, because they have already improved, but until recently a car could be trained to identify a child, an adult or a dog and not run them over; what would happen if a wild boar or a scooter appeared instead? The AI had to be able to process this new element, for which it was not prepared, and judge its relevance instantly, with no time for traditional processing.

“Until now artificial intelligence has worked, so to speak, with classifiers. For A, B or C it gives more or less standardized answers, and that is what we thought was happening in the brain, but it is not true: most of what the brain does does not rely on stimulus and response. 80% of brain activity, for example, consists of maintaining an internal representation of what happens around us,” he points out.
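The classifier behaviour Miguel describes can be caricatured as a lookup table: a fixed mapping from known stimuli to canned responses, with no internal model of the world. A hypothetical sketch (the names `RESPONSES` and `classify` are illustrative, not from the article):

```python
# Stimulus-response "classifier": standardized answers for known
# inputs, and no way to reason about anything outside the table.

RESPONSES = {"A": "brake", "B": "slow down", "C": "continue"}

def classify(stimulus: str) -> str:
    # No prediction, no running representation of the environment --
    # just a fixed mapping, unlike the brain's continuous internal model.
    return RESPONSES.get(stimulus, "unknown stimulus")

print(classify("A"))          # brake
print(classify("wild boar"))  # unknown stimulus
```

The wild boar from the earlier example falls straight through such a table, which is why the stimulus-response picture is a poor model of both brains and robust AI.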

AI SUSTAINABILITY IS IN THE BRAIN

However, even if it were possible to discover how to emulate all this on a conventional computer, it would be practically impossible, and impractical in any case. “Even supercomputers perform step-by-step algorithmic operations, which slows down processes and wastes an enormous amount of energy. Quantum computers, on the other hand, do not perform algorithmic operations, and neither do our brains, although they have other limitations today,” explains the engineer.

In this way, he says, right now computing is done “little by little”; that is, “even when we simulate neural circuits we do it as if it were software on a CPU, following algorithmic rules that are completely unrelated to the functioning of the brain”. It is at this point that neuromorphic computing comes in, a field that aspires to replicate, both in chips and in processes, the way in which human thought is articulated.
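The basic unit that neuromorphic hardware implements is the spiking neuron, which communicates only when an event occurs rather than on every clock tick. A minimal leaky integrate-and-fire sketch, with illustrative parameters not taken from any specific chip:

```python
# Leaky integrate-and-fire neuron: charge accumulates with each input,
# leaks away over time, and a spike fires only when a threshold is
# crossed -- event-driven, unlike step-by-step algorithmic processing.

def simulate_lif(inputs, threshold=1.0, leak=0.9):
    """Return a 0/1 spike train for a sequence of input currents."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)   # fire...
            potential = 0.0    # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.5, 0.5, 0.5, 0.0, 1.2]))  # -> [0, 0, 1, 0, 1]
```

Simulating such neurons on a CPU, as the code above does, is exactly the mismatch Miguel describes: neuromorphic chips instead realize the integrate-and-fire dynamics directly in silicon.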

Right now, only South Korea, where Miguel works, and Taiwan are capable of manufacturing the most advanced chips in this field, with all that this implies for the race for technological development between America, Europe and Asia.

In addition, it is estimated that today 3% of all the electricity used in the world is consumed in data centers, and that, in 2030, this percentage could reach 13%. Optimizing computational models, as the brain has done over millions of years of evolution, would make artificial intelligence a much more sustainable technology.
