Exploring the Use of Brain Organoids as Biological Neural Networks

by time news

2024-01-21 16:00:00

Virgil Ulam was a brilliant, groundbreaking scientist, but also a reckless one. So reckless that he injected himself with the highlight of his own research: genetically engineered white blood cells capable of performing complex calculations and learning on their own. At first he enjoyed a host of physiological and mental upgrades, but the smart cells were not satisfied with that, and things got out of control.

This is the essence of the plot of the award-winning story "Blood Music" (1983) by science fiction writer Greg Bear, who was among the first to blur the lines between biology and technology, even if only in fiction. Now, forty years later, Bear's ideas are no longer pure fiction; they touch the realm of the scientifically possible. Recently, researchers succeeded in producing biological tissues outside the body that are capable of learning and solving complicated computational problems. Before we delve into the sophisticated system they created, it is worth pausing for a moment to ask: why would we even want to perform calculations in living tissue? What is wrong with ordinary computers?

The "memory wall" means that even powerful and fast processors are limited in their ability to realize their inherent potential. A computer motherboard | Maayan Karlinsky Tzur, using Midjourney

I’m off the hook

Breakthrough technologies that stormed into our lives a little over a year ago, such as ChatGPT or the image-generation platform Midjourney, are based on artificial neural networks. These are mathematical models that draw inspiration from the way the brain processes information: unlike more traditional software, in which every action is predetermined by the programmers, artificial neural networks can change the way they process information in response to feedback. When such software fails at a task, it changes the way its components pass information to one another, and repeats the task over and over until it receives positive feedback.
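This feedback loop can be illustrated with a minimal sketch (purely illustrative, not the code behind any system mentioned in the article): a single-layer network repeatedly attempts a task, and whenever its output is wrong, the error signal nudges its weights so the next attempt is better.

```python
# Minimal sketch of learning from feedback: a one-neuron "network"
# learns the logical OR function. Each wrong answer (negative feedback)
# changes how the inputs are weighted; the loop repeats until the
# answers come out right. All names here are illustrative.
def train_or_gate(epochs=20, lr=0.1):
    # Inputs and desired outputs for logical OR
    data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in data:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - out  # the feedback signal
            # Feedback changes how the components weigh their inputs
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

After training, the network answers all four OR cases correctly, even though no programmer ever wrote an OR rule into it; the rule emerged from feedback alone.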

The power of these sophisticated artificial networks lies in their structure, which is modeled on the networks of nerve cells in the brain. Neurons communicate through electro-chemical signals that they transmit to one another at connection points, or junctions, called synapses. Synapses allow a cell to receive many messages arriving from neurons in different information-processing centers in the brain, to weigh the signals received from many cells at once, and to decide whether to pass the message on. When we learn something new, new synapses form and allow the brain to process the new information faster and more accurately.
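The "weigh and decide" step can be sketched in a few lines (an assumed simplification of a biological neuron, not the study's model): each incoming signal arrives through a synapse with its own strength, and the neuron fires only if the weighted sum crosses a threshold.

```python
# Toy model of synaptic integration: three upstream neurons send
# signals; each synapse scales its signal by its own strength, and
# the receiving neuron passes the message on only when the combined
# input crosses a firing threshold. Values are illustrative.
def neuron_fires(signals, synapse_strengths, threshold=1.0):
    total = sum(s * w for s, w in zip(signals, synapse_strengths))
    return total >= threshold

# Two active inputs through strong synapses: 1*0.2 + 1*0.9 = 1.1
print(neuron_fires([1, 1, 0], [0.2, 0.9, 0.5]))  # True
# Same number of active inputs, weaker synapses: 0.2 + 0.5 = 0.7
print(neuron_fires([1, 0, 1], [0.2, 0.9, 0.5]))  # False
```

Learning, in this picture, amounts to changing the `synapse_strengths`, which is exactly what new synapses do in the brain and what weight updates do in an artificial network.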

Although we tend to be impressed by computerized systems, our own private biological neural network, the brain, allows us to process unimaginable amounts of information at any given moment. It is enough to look at the space around you to appreciate how remarkable this is: the multitude of angles and shades that stimulate the retina are translated into meaningful objects, which we can recognize and manipulate almost immediately.

One of the reasons a computer cannot process such a large amount of information at a similar speed is the architectural separation between the computer's memory and its processor. In every operation the computer performs, information must pass back and forth between these components, which slows it down considerably. This problem is known as the von Neumann bottleneck, or the "memory wall", and it means that even powerful and very fast processors are limited in their ability to realize their inherent potential.

In a new study, researchers from Indiana University Bloomington tried a creative way to bypass the computer's limitations: they created a new computational system called Brainoware (a portmanteau of "brain" and "hardware"). In Brainoware, too, data is processed by neural networks; but instead of building artificial networks, the researchers used the real thing: biological neural networks.

The researchers grew human brain organoids: tiny, three-dimensional organ-like structures grown in the laboratory that mimic the full organ. To this end, they took human stem cells, cells that have not yet committed to a final function and can become any of the cell types in the human body, and created the biological conditions that would cause them to become brain cells. In a short time, the organoids contained mature nerve cells capable of sending electrical signals, alongside cells that support their activity.

The organoids were placed on a substrate of tiny, densely packed electrodes that recorded their electrical activity. The researchers translated the recorded signals into activity maps showing when, and at which electrode, activity was recorded.

The brain, our biological neural network, allows us to process unimaginable amounts of information at any given moment. A neural network in the brain | Romanova Natali, Shutterstock

A school for tiny minds

The electrode pad not only recorded the electrical activity of the organoids, but also allowed the researchers to transmit electrical messages to them. In this way, they could test whether the organoids respond to stimuli, and whether they can solve difficult problems and improve their performance the way artificial neural networks do. And honestly, what could be more difficult than recognizing Japanese syllables?

The researchers took a database of 240 short recordings of spoken Japanese vowel sounds, from eight different speakers. They translated the sounds into series of electrical signals the tiny brains could understand, just as the ear translates sound waves into electrical signals and allows the brain to process auditory information.

The researchers trained the organoids for two days, during which they were exposed to the recordings twice a day. They then examined the electrical signals the organoids produced in response to each speaker: the more distinctive the signals were for each speaker, the better and more accurate the performance was considered. Before training began, the researchers showed that the organoids had an innate ability to distinguish between speakers, but their accuracy was low, only slightly better than pure guesswork. After the four training sessions, the organoids could already distinguish between the speakers with an accuracy of about 80 percent.

Five units of matriculation mathematics

The researchers then decided that recognizing Japanese syllables was still too easy a task, and wanted to know whether their organoids were also sophisticated enough to process information that seems chaotic and random but is actually generated by a well-defined mathematical rule. One such system is the Hénon map, and the organoids were now asked to observe it and learn to predict it.

To grasp the complexity of the Hénon map, imagine looking through a kaleidoscope. Every time you turn it, the segments of color and shape shift slightly, creating a new, varied and colorful geometric pattern. The patterns look random, but they are not: they are the product of a fixed, well-defined arrangement of the colored pieces in the kaleidoscope. To predict the Hénon map, the organoids had to show that they could anticipate what the seemingly chaotic picture would look like as the kaleidoscope keeps turning.
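The Hénon map itself is remarkably simple to write down. It is a standard chaotic system defined by two equations, x(n+1) = 1 - a·x(n)² + y(n) and y(n+1) = b·x(n); the parameter values a = 1.4 and b = 0.3 below are the classic textbook choice, not values taken from the study:

```python
# Generate a Hénon map trajectory. Despite looking random, the series
# is fully deterministic: each point follows from the previous one by
# a fixed rule, which is what makes it predictable in principle.
def henon_series(n, a=1.4, b=0.3, x0=0.0, y0=0.0):
    points = []
    x, y = x0, y0
    for _ in range(n):
        # Tuple assignment updates x and y simultaneously,
        # so the new y uses the *old* x, as the map requires.
        x, y = 1 - a * x * x + y, b * x
        points.append((x, y))
    return points

# Starting from (0, 0), the first point is always (1.0, 0.0),
# and the same seed always reproduces the same "chaotic" series.
print(henon_series(3))
```

The deterministic-but-chaotic character is exactly what the organoids were asked to exploit: an observer who has learned the rule can anticipate the next point, while to everyone else the series looks like noise.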

The researchers translated such Hénon maps into series of electrical signals, and exposed the organoids to a series of Morse-code-like pulses that allowed them to learn how the mathematical sequence behaves. Returning to the kaleidoscope example, one can imagine letting the organoids watch the geometric shapes while the axis is being turned. After several rounds of training, the researchers exposed the organoids to a series of signals representing a section of the Hénon map, and tested their ability to anticipate the continuation of the series and produce the corresponding electrical signals. Here too, the organoids' performance was mediocre at first, predicting the mathematical series with an accuracy of about 35 percent; but after a few training sessions they were already performing the task with an accuracy of about 80 percent.

The researchers exposed the organoids to a series of Morse-code-like signals, which allowed them to learn how a mathematical sequence behaves. A brain organoid in its early stages; cell nuclei in red, cell bodies in green | Photo: Maayan Karlinsky Tzur

Did the organoids really “learn”?

It is difficult to understand what exactly caused these tiny tissues to improve their performance following training. Is this real learning?

When humans repeat a task over and over, their neurons change, and new synapses connect neurons that previously did not communicate with each other directly. This flexibility of the brain allows us to change and improve quickly. Using high-resolution imaging, the researchers identified a greater increase in the number of connections between nerve cells in organoids that underwent training, compared with organoids that were not trained.

To verify the link between the improved connectivity and Brainoware's learning, the researchers injected some of the organoids with a substance that blocks synaptic activity and prevents the formation of new connections between nerve cells. The reasoning was that if the organoids improved at predicting Hénon maps despite the neutralized synapses, the researchers could conclude that the improvement in performance was unrelated to the growing connectivity between neurons. In practice, however, blocking the synapses prevented learning: organoids exposed to the substance continued to perform the task with very low accuracy even after the full training series.

For all its promise, the study has clear limitations. As in other studies that use organoids, a high rate of dead cells was detected at the center of the tissue, due to the lack of nutrients and oxygen deep inside the organoid in the absence of an active blood supply to nourish it. When many cells die deep in the tissue, the learning process of the neural networks can be expected to be impaired and limited in scope.

Brainoware's accuracy was also lower than what artificial neural networks can achieve. And although the organoids needed fewer learning sessions than artificial networks to improve their performance, each training session lasted many times longer than it takes a computer to complete dozens of sessions.

Finally, the researchers had hoped that the new technology would be more environmentally friendly. Brainoware is supposed to save a great deal of energy, since the brain spends very little energy processing huge amounts of information. In practice, however, any savings were dwarfed by the energy required to grow the organoids: sophisticated incubators, many chemical reagents, an electronic system, and the computerized systems needed to translate their activity.

For now, we will have to wait for further research to see how this technology could help us bypass the memory wall, or reduce the energy needs of future computers. Until then, it is better that no scientist decides to inject themselves with Brainoware, if only because of the slim chance that such a step would endanger the entire human race, as Greg Bear foresaw in his story.

