
Scientists create AI-powered ‘brain decoder’ that turns brain signals into speech

Neuroscientists at the University of California, San Francisco, have designed a device that can transform brain signals into speech.

The new technology, whose findings were published on Wednesday in the scientific journal Nature, was developed to help people who have lost the ability to speak regain a way to communicate.

Although not yet ready for use outside the lab, the software can already synthesise sentences that are almost fully comprehensible by monitoring brain regions that are active during conscious vocalisation. This also means, however, that it cannot vocalise unspoken thoughts.

The program has not yet been tested in patients who cannot speak. Instead, researchers trialled it by hooking electrodes directly up to the brains of five people as they read sentences aloud.

The data was then fed to a computer program that matched patterns in the brain’s signals with the vocal movements the words would produce if spoken. Ultimately, the algorithm turned the movements into fast-paced synthetic speech.
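The two-stage pipeline described above can be sketched in code. This is a purely illustrative toy, not the study's method: the researchers used recurrent neural networks, whereas the stand-in below uses random linear maps, and all dimensions and names (`n_channels`, `decode`, etc.) are assumptions chosen only to show the data flow from brain signals to vocal-tract movements to acoustic features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumed, not from the study).
n_channels = 256      # electrode channels recorded from the brain
n_articulators = 33   # vocal-tract movement features
n_acoustic = 32       # acoustic features fed to a speech synthesiser

# Toy stand-ins for the two learned stages; the real system
# trained recurrent neural networks for each mapping.
W_articulatory = rng.normal(size=(n_articulators, n_channels)) * 0.01
W_acoustic = rng.normal(size=(n_acoustic, n_articulators)) * 0.1

def decode(neural_activity: np.ndarray) -> np.ndarray:
    """Map a (time, channels) array of neural signals to acoustic features."""
    movements = neural_activity @ W_articulatory.T   # stage 1: brain -> movements
    acoustics = movements @ W_acoustic.T             # stage 2: movements -> sound
    return acoustics

signals = rng.normal(size=(100, n_channels))  # 100 time steps of recordings
features = decode(signals)
print(features.shape)  # (100, 32)
```

The point of the intermediate stage is that vocal-tract movements are a simpler, more stable target to decode from brain activity than raw audio, which is why the study's two-step design outperformed earlier direct approaches.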

This is not the first time artificial intelligence has been used to translate brain signals into speech. However, previous attempts focused on translating single syllables.

“Making the leap from single syllables to sentences is technically quite challenging and is one of the things that makes the current work so impressive”, said Chethan Pandarinath, a neuroengineer at Emory University in Atlanta, Georgia, who co-wrote a commentary accompanying the study.

The sounds produced by the program were transcribed by hundreds of listeners, who were able to understand, on average, 70 per cent of the words spoken.

However, the results varied considerably depending on the difficulty of the syllables and words in each sentence. Some phrases were transcribed perfectly every time, while others were almost completely incomprehensible.

All in all, there is still much to address, Marc Slutzky, a neurologist at Northwestern University, told Nature News. The study nonetheless represents “a really important step,” he said, “but there’s still a long way to go before synthesised speech is easily intelligible.”


