Brain-computer interface to end the isolation of patients with locked-in syndrome

By Time News

Locked-in syndrome is a rare neurological disorder. It is often caused by amyotrophic lateral sclerosis (ALS), an incurable degenerative disease of the motor nervous system. Affected individuals are at risk of losing control of their muscles completely, while consciousness and mental functions remain intact. Those affected can see and hear, but usually only the eyelids remain as a means of communication. Brain-computer interface (BCI) technologies are expected to make communication with locked-in patients decisively easier. They are based on the finding that merely imagining a behavior produces measurable changes in the brain's electrical activity: imagining moving a hand or a foot, for example, triggers activation of the motor cortex.

Brain-computer interface technologies are divided into invasive and non-invasive methods. In non-invasive methods, brain activity is measured with electrodes placed on the scalp. The measurements are based on electroencephalography (EEG), which has the disadvantage of low signal resolution and limited accuracy.

Invasive technologies for brain-computer interfaces

Invasive methods can compensate for these weaknesses by using electrodes implanted above the motor cortex to record an electrocorticogram (ECoG). To date, however, invasive brain-computer interface technologies still lack the desired combination of miniaturization and high spatial resolution, because the latter requires a large number of measurement points in a small space. Moreover, the software cannot yet be operated by the test subjects on their own. “For both the EEG-based and the intracortical systems, repeated calibration has to be performed to bring the algorithms back up to date,” explains Professor Gernot Müller-Putz of the Institute of Neurotechnology at Graz University of Technology, Austria.

He is currently conducting research in the European research consortium INTRECOM, which aims to solve these problems. The implantable technology will be able to decode speech in real time from brain signals. For the first time, implanted patients will have a complete and easy-to-use communication system that allows them to speak and control a computer cursor.

[Image: Decoding articulator movements for brain-computer interfaces. © University Medical Center Utrecht / RIBS]

Preliminary work in Utrecht

The consortium of research and industry partners is led by Professor Nick Ramsey of the University Medical Center Utrecht (UMC Utrecht). He has already shown in preliminary work that an attempted hand movement can be detected and used as a mouse click. “This works similarly to assistive technology, where individual letters are scanned and the patient can select and click letters,” explains Professor Müller-Putz.
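
To make that scanning principle concrete, here is a minimal, illustrative Python sketch – not project code. The decoded_click() stub stands in for the real BCI output (a detected movement attempt), and the six-column letter grid is an arbitrary choice.

```python
# Illustrative sketch of a "scanning" speller driven by a single binary
# signal, the way a decoded movement attempt can act as a mouse click.
# decoded_click() is a random stub standing in for a real BCI decoder.
import random
import string

# Letters arranged in rows of six (an arbitrary layout for illustration).
GRID = [list(string.ascii_uppercase[i:i + 6]) for i in range(0, 26, 6)]

def decoded_click() -> bool:
    """Stand-in for the BCI: returns True when an attempted hand
    movement would be detected in the brain signals (here: random)."""
    return random.random() < 0.3

def scan_select() -> str:
    """Highlight rows one by one, then letters within the chosen row;
    a 'click' (detected movement attempt) confirms the current item."""
    while True:
        for row in GRID:
            if decoded_click():          # click while a row is highlighted
                for letter in row:
                    if decoded_click():  # click while a letter is highlighted
                        return letter

if __name__ == "__main__":
    print("Selected:", scan_select())
```

In a real system the click would come from the decoder detecting an attempted hand movement in the brain signals, not from a random number generator.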

He himself has just completed the EU project Feel Your Reach, in which he was able to compute the trajectories of imagined arm movements from EEG signals with a certain probability. This technology will be further refined in the current project. At Graz University of Technology, the focus has so far been on non-invasive brain-computer interface technologies. Together with Professor Ramsey, Müller-Putz is now working for the first time with ECoG measurements. The material to which the electrodes are attached – the so-called array – rests directly on the motor cortex.

Two research approaches

To move the research forward on solid ground, the partners are taking two approaches. Team Ramsey wants to generate speech from speech attempts: the researchers evaluate the person’s attempts to produce the individual sounds of a spoken word. In this way, they can read from the brain signals in real time what the person is trying to say.

The Müller-Putz team focuses on every additional form of communication that can be expressed through cursor control – from simply selecting icons on the screen to freely moving the cursor and making choices with it.

The hardware of the brain-computer interface consists of a set of electrodes – called an array – and a biosignal amplifier. While the electrode array is placed over the motor areas, the biosignal amplifier is implanted in the skull bone. The amplifier processes the data and transmits it wirelessly to external computers for analysis and decoding.

Miniaturization vs. high resolution

One of the technical challenges is the aforementioned miniaturization, which is a prerequisite for implantation. At the same time, recording the brain signals requires high spatial resolution, meaning a very large number of measuring points relative to the size of the array: the smaller the array, the more densely the electrodes must be packed. The temporal resolution, in turn, is in the millisecond range. Both high spatial and high temporal resolution are fundamental for real-time speech decoding.
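
As a rough illustration of this trade-off – with hypothetical numbers, not INTRECOM specifications – the relationship between array size, channel count, and electrode spacing, and between millisecond resolution and sampling rate, looks like this:

```python
# Back-of-the-envelope numbers (hypothetical, not project specifications):
# how array size, channel count, and electrode pitch constrain each other.
import math

array_side_mm = 8.0  # assumed edge length of a square ECoG grid
channels = 64        # assumed electrode count (an 8 x 8 grid)

per_side = int(math.sqrt(channels))
pitch_mm = array_side_mm / (per_side - 1)  # center-to-center spacing
print(f"{channels} electrodes on {array_side_mm} mm -> {pitch_mm:.2f} mm pitch")

# Millisecond-range temporal resolution implies kilohertz-range sampling:
# to distinguish events 1 ms apart, at least 1000 samples/second are needed.
dt_ms = 1.0
min_rate_hz = 1000.0 / dt_ms
print(f"{dt_ms} ms resolution -> at least {min_rate_hz:.0f} Hz sampling")
```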

To convert the brain signals into spoken words, algorithms extract parameters from the measurement data. These parameters indicate whether the mouth is trying to produce sounds or whether the hand is trying to move the cursor. Ultimately, the system still has to be embedded in software that works in a home setting without technical experts on hand. To this end, the system must be user-friendly and robust and draw on the latest AI and machine learning technologies.
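
To give a flavor of such a decoding pipeline, here is a generic, illustrative Python sketch – not the consortium’s actual algorithms. All parameters are assumptions (16 channels, 1 kHz sampling, a 70–170 Hz "high-gamma" band, synthetic training data):

```python
# Generic decoding sketch: extract high-gamma band power per channel from
# short ECoG windows, then classify whether a movement/speech attempt is
# present. Synthetic data throughout; all parameters are assumptions.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

FS = 1000    # assumed sampling rate in Hz
N_CH = 16    # assumed number of ECoG channels
WIN = 200    # 200 ms analysis window, sized for real-time use

def high_gamma_power(window: np.ndarray) -> np.ndarray:
    """Band-pass 70-170 Hz and return the mean power per channel."""
    b, a = butter(4, [70, 170], btype="band", fs=FS)
    filtered = filtfilt(b, a, window, axis=1)
    return (filtered ** 2).mean(axis=1)

# Synthetic training data: 'attempt' windows carry extra high-gamma energy.
rng = np.random.default_rng(0)

def make_window(attempt: bool) -> np.ndarray:
    x = rng.standard_normal((N_CH, WIN))
    if attempt:
        t = np.arange(WIN) / FS
        x += 2.0 * np.sin(2 * np.pi * 100 * t)  # injected 100 Hz activity
    return x

X = np.array([high_gamma_power(make_window(i % 2 == 0)) for i in range(200)])
y = np.array([i % 2 == 0 for i in range(200)], dtype=int)

clf = LogisticRegression(max_iter=1000).fit(X, y)
test = high_gamma_power(make_window(attempt=True)).reshape(1, -1)
print("Attempt detected:", bool(clf.predict(test)[0]))
```

A deployed system would replace the synthetic windows with streamed ECoG data and a far more capable decoder, but the basic structure – window the signal, extract features, classify – stays the same.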

Industrial partners

Two industrial partners in the consortium are responsible for designing the hardware: the Swiss Wyss Center for Bio and Neuroengineering will design the biosignal amplifier, and the German medical device manufacturer CorTec will develop parts of the implantable electronics that record the brain signals: custom high-resolution ECoG electrode grids with a high channel count.

“The individual components already exist in different designs. We are now going to refine them and for the first time bring different things together so that we can implement them properly. That’s the exciting part,” says Müller-Putz. The brain-computer interface will be tested on two people with locked-in syndrome, in Utrecht and in Graz.

About the INTRECOM project

Intracranial Neuro Telemetry to REstore COMmunication (INTRECOM) has been selected by the European Innovation Council’s Pathfinder Programme and is funded by the EU with almost four million euros. The project will run from fall 2022 to fall 2026. Professor Müller-Putz is currently preparing for the start and is still looking for interested postdocs and PhD students to join the team at the Institute of Neurotechnology at Graz University of Technology, Austria.
