Computers taught to turn thoughts into words

Being unable to speak might soon be a thing of the past, with computers now able to reconstruct words from thoughts alone.

In three recent experiments, teams of neuroscientists were able to decode brain signals into speech, Science magazine reports.

But it's not easy, requiring physical access to the brain itself. Two of the experiments used data recorded from epilepsy patients during treatment, which required electrodes to be implanted for days at a time, and the third used data recorded during the removal of a brain tumour.

In one of the experiments, epilepsy patients were recorded telling stories and reading digits - both their voice and their brain activity. The data was fed into a type of computer system called a neural network, which - using the brain data alone, not the audio - reconstructed their voices. When the computer read out the digits, 75 percent of listeners were able to understand what it was saying.
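To make that pipeline concrete, here is a minimal sketch in Python of the general approach described above: train a model on paired brain and audio recordings, then reconstruct the audio from brain data alone. It uses synthetic data and scikit-learn; the array sizes, the choice of MLPRegressor and the correlation check are illustrative assumptions, not the researchers' actual methods or code.

```python
# Minimal sketch of the decoding idea, using synthetic data.
# The real studies used intracranial recordings and more sophisticated deep
# networks; every number and model choice here is an assumption for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

n_frames = 2000      # time frames recorded while the patient speaks
n_electrodes = 64    # neural features per frame (assumed)
n_spec_bins = 32     # spectrogram bins per frame of the simultaneous audio (assumed)

# Paired training data: brain activity and the audio recorded at the same time.
brain = rng.normal(size=(n_frames, n_electrodes))
true_map = rng.normal(size=(n_electrodes, n_spec_bins))
audio_spec = brain @ true_map + 0.1 * rng.normal(size=(n_frames, n_spec_bins))

# Train a small neural network to map brain activity -> audio spectrogram frames.
net = MLPRegressor(hidden_layer_sizes=(128,), max_iter=500, random_state=0)
net.fit(brain[:1500], audio_spec[:1500])

# Reconstruction step: predict spectrogram frames from brain data alone (no audio).
predicted_spec = net.predict(brain[1500:])
corr = np.corrcoef(predicted_spec.ravel(), audio_spec[1500:].ravel())[0, 1]
print(f"correlation between reconstructed and real spectrogram: {corr:.2f}")
```

In the studies, reconstructed frames like these would then be turned back into audible speech and played to listeners, as in the intelligibility tests described above.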

In the second experiment with epilepsy patients, listeners were able to understand the computer 80 percent of the time when it read out sentences constructed using only data collected from the brain.

While the experiment using data from a patient undergoing the removal of a brain tumour only had a 40 percent success rate, that team set itself a much tougher challenge. The computer collected data from the patient as they mouthed single words, then was asked to reconstruct completely new sentences based on neural data alone.

"We are trying to work out the pattern of neurons that turn on and off at different time points, and infer the speech sound," Nima Mesgarani, a computer scientist at Columbia University, told Science. "The mapping from one to the other is not very straightforward."

Every person's voice is different too, so the neural network has to be trained on each person beforehand. At least for now.

The goal is for people who can't speak at all to be able to voice their thoughts without having to resort to gadgets - especially those who can't even use such devices, for example people suffering from locked-in syndrome.

"It's really unclear how to do that at all, however," Gerwin Schalk, neuroengineer at the National Center for Adaptive Neurotechnologies told Science.

Brain signals produced when a person speaks aloud reportedly look different from those of their inner, private monologue, and people who can't talk might not be able to differentiate between the two - which could obviously cause problems.

Sound files for two of the experiments can be listened to here.

Newshub.