Brain-machine interface device predicts internal speech

Summary: A new brain-machine interface is the most accurate yet for predicting a person’s internal monologue. The technology could one day help people with speech disabilities communicate effectively.

Source: Caltech

New research from Caltech shows how devices implanted in people’s brains, called brain-machine interfaces (BMIs), could one day help patients who have lost their ability to speak.

In a new study presented at the Society for Neuroscience 2022 conference in San Diego, the researchers demonstrated that they could use a BMI to accurately predict which words a quadriplegic participant was merely thinking, without speaking or miming them.

“You may have seen videos of people with quadriplegia using BMIs to control robotic arms and hands, such as grabbing a bottle and drinking from it or eating a piece of chocolate,” says Sarah Wandelt, a graduate student at Caltech in the laboratory of Richard Andersen, James G. Boswell Professor of Neuroscience and director of the Tianqiao and Chrissy Chen Brain-Machine Interface Center at Caltech.

“These new results are promising in the fields of language and communication. We used a BMI to reconstruct speech,” says Wandelt, who presented the results at the conference on Nov. 13.

Previous studies have successfully predicted participants’ speech by analyzing brain signals recorded in motor areas when a participant whispers or mimes words. But predicting what someone is thinking, the internal dialogue, is much harder because it doesn’t involve movement, Wandelt explains.

“In the past, algorithms that attempted to predict internal speech were only able to predict three or four words and with low or no real-time accuracy,” Wandelt says.

The new research is the most accurate yet for predicting internal words. In this case, brain signals were recorded from single neurons in an area of the brain called the supramarginal gyrus, located in the posterior parietal cortex. Researchers had found in a previous study that this area of the brain represents spoken words.

Now the team has extended their findings to internal speech. In the study, the researchers first trained the BMI to recognize the brain patterns produced when certain words were spoken internally, or thought about, by the quadriplegic participant. This training period lasted approximately 15 minutes.

They then flashed a word on a screen and asked the participant to say the word internally. The results showed that the BMI algorithms were able to predict eight words with up to 91% accuracy.
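The article does not describe the decoder itself, so the following is only a minimal sketch of the train-then-predict workflow described above: a simple linear classifier is fit to binned single-neuron firing rates recorded while the participant internally speaks cued words, and is then used to predict the word on a new trial. The word list, variable names, and choice of classifier are illustrative assumptions, not the authors’ implementation.

```python
# Hypothetical sketch of the train-then-predict workflow described in the article.
# The word set, features (binned firing rates), and classifier are assumptions
# for illustration only, not the decoder used in the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder vocabulary: six words plus two pseudowords, matching the study design.
WORDS = [f"word_{i}" for i in range(1, 7)] + ["pseudoword_1", "pseudoword_2"]

rng = np.random.default_rng(0)                      # stand-in for recorded neural data
trials_per_word, n_neurons = 10, 100
train_rates = rng.poisson(5.0, size=(len(WORDS) * trials_per_word, n_neurons)).astype(float)
train_words = np.repeat(WORDS, trials_per_word)

# Training phase (roughly 15 minutes in the study): learn word-specific neural patterns.
decoder = LinearDiscriminantAnalysis()
decoder.fit(train_rates, train_words)

# Online phase: a word is flashed, the participant says it internally,
# and the decoder predicts the word from the newly recorded firing rates.
new_trial_rates = rng.poisson(5.0, size=(1, n_neurons)).astype(float)
print("Decoded word:", decoder.predict(new_trial_rates)[0])
```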

The work is still preliminary but could help patients with brain damage, paralysis or conditions such as amyotrophic lateral sclerosis (ALS) that affect speech.

“Neurological disorders can lead to complete paralysis of voluntary muscles, rendering patients unable to speak or move, but they are still able to think and reason. For this population, an internal speech BMI would be incredibly helpful,” says Wandelt.

“We’ve already shown that we can decode imagined hand shapes for grasping from the human supramarginal gyrus,” says Andersen. “Being able to also decode speech from this area suggests that one implant can restore two important human abilities: grasping and speech.”

The researchers also point out that BMIs cannot be used to read people’s minds; the device must be trained separately on each person’s brain, and it only works when the person concentrates on the word.

Besides Wandelt and Andersen, other authors of the Caltech study include David Bjanes, Kelsie Pejsa, Brian Lee and Charles Liu. Lee and Liu are Caltech visiting associates who serve on the faculty of USC’s Keck School of Medicine.

About this neurotechnology research news

Author: Whitney Clavin
Source: Caltech
Contact: Whitney Clavin – Caltech
Image: The image is in the public domain

Original research: Closed access.
“Online internal speech decoding from single neurons in a human participant” by Sarah Wandelt et al. medRxiv

Abstract

Online internal speech decoding from single neurons in a human participant

Speech brain-machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost their speech abilities due to illness or injury.

Although significant progress has been made in decoding vocalized, attempted, and mimed speech, results from decoding internal speech are sparse and have not yet achieved high functionality. In particular, we still do not know from which areas of the brain internal speech can be decoded.

In this work, a quadriplegic participant with implanted microelectrode arrays located in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1) performed internal and vocalized speech of six words and two pseudowords.

We found robust internal speech decoding from single SMG neuron activity, achieving up to 91% classification accuracy during an online task (12.5% chance level).
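For context, the 12.5% chance level is simply one out of the eight classes (six words plus two pseudowords). A common sanity check, sketched below with hypothetical placeholder data, is to compare decoder accuracy against a shuffled-label baseline, which should stay near that chance level; this is an illustrative exercise, not an analysis from the paper.

```python
# Illustrative check of the 12.5% chance level for an 8-class decoding task,
# using a shuffled-label baseline. All data and variable names are placeholders.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_classes, trials_per_class, n_neurons = 8, 10, 100
X = rng.poisson(5.0, size=(n_classes * trials_per_class, n_neurons)).astype(float)
y = np.repeat(np.arange(n_classes), trials_per_class)

print("Theoretical chance level:", 1 / n_classes)   # 0.125, i.e. 12.5%

# With shuffled labels, cross-validated accuracy should hover around chance.
y_shuffled = rng.permutation(y)
baseline = cross_val_score(LinearDiscriminantAnalysis(), X, y_shuffled, cv=5)
print("Shuffled-label accuracy:", baseline.mean())
```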

We found evidence for shared neural representations between internal speech, word reading, and vocalized speech processes. SMG represented words in different languages (English/Spanish) as well as pseudowords, providing evidence for phonetic encoding.

Moreover, our decoder achieved high classification accuracy with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized speech but not by internal speech, suggesting that no articulatory movement of the vocal tract occurred during internal speech production.

This work represents the first proof of concept for a high-performance internal speech BMI.