New AI system could help people who lost their ability to speak

Researchers have developed a non-invasive language decoder that can reconstruct speech from functional MRI data — sparking hope this technology can one day help people who have lost their ability to speak due to injuries like strokes or diseases like ALS.

"Currently, language decoding is done using implanted devices that require neurosurgery. Our study is the first to decode continuous language, meaning more than single words or sentences, from non-invasive brain recordings, which we collect using functional MRI," Jerry Tang, one of the researchers, said during a press conference about the study, the findings of which were published Monday in Nature Neuroscience.

A press release about the study describes it as a "new artificial intelligence system called a semantic decoder."

Other decoders that rely on non-invasive recordings of brain activity have been limited to decoding single words or short phrases, while invasive versions require surgically implanting electrodes in the brain.

In the newly developed decoder, speech reconstructions aren't word-for-word, but can recover the "gist" of what the user is hearing.

"In one example, the user heard the words, 'I don't have my driver's license yet.' And the decoder predicted the words, 'She has not even started to learn to try it yet,'" Tang said. "We found that the decoder was also able to recover the gist of what the user was imagining or seeing."

Next steps include making the approach more practical: fMRI scanners aren't exactly portable, since they require a person to lie inside a machine that uses a large, expensive magnet to take readings.

"We think of this work as a proof of concept that language can be decoded from non-invasive recordings," Tang said. "So moving forward, we want to see if our approach works with recordings from cheaper or more portable devices."

Mental privacy was another important aspect of the study.

"Nobody's brain should be decoded without their cooperation, so we tested this in our study," Tang said, adding they found someone's cooperation is necessary both to train and run the decoder. "We found that you can't train a decoder on one person and then run it on a different person, so decoding requires training data from the specific user."

This could change, however, as technology improves, making privacy an important factor in future research.

"It's important to keep researching the privacy implications of brain decoding and enact policies that protect each person's mental privacy," Tang said.
