A new artificial intelligence system can convert signals from the human brain into text with an accuracy of up to 97%.
In the past few years, we have become accustomed to speech-to-text technology thanks to virtual assistants from Amazon and Google. Now humanity has reached another impressive milestone: technology that converts thought into text.
To decode the electrocorticogram, a research team in the lab of Dr. Edward Chang at the University of California, San Francisco (UCSF) recorded the signals emitted from patients' brains.
Doctors asked the patients to read and repeat a number of sentences, using electrodes to record the brain signals and activity at that moment.
Signals from the patient's cerebral cortex are recorded as electrical impulses. (Photo: mc.ai).
From the collected data, the group analyzed the brain signals corresponding to particular elements of speech: vowels, consonants, mouth movements, and so on.
They then used an artificial neural network to decode these signals back into text. The experimental database consisted of only about 30-50 sentences; the system tries to predict and assemble text based on the signals from the cerebral cortex.
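The decoding step described above can be pictured as mapping patterns in the recorded signal to known words. The sketch below is a toy illustration only: all templates and feature values are hypothetical, and a simple nearest-template lookup stands in for the actual neural network the team trained.

```python
import math

# Hypothetical per-word "signal templates" that a system might learn
# from the training sentences (values invented for illustration).
templates = {
    "museum": [0.9, 0.1, 0.3],
    "dog": [0.2, 0.8, 0.5],
    "cake": [0.4, 0.4, 0.9],
}

def decode(feature_vector):
    # Pick the word whose template lies closest (Euclidean distance)
    # to the observed cortical feature vector.
    return min(
        templates,
        key=lambda word: math.dist(templates[word], feature_vector),
    )

print(decode([0.85, 0.15, 0.25]))  # closest to the "museum" template
```

The real system learns this mapping statistically from example recordings rather than from hand-built templates, which is why it needs the patients to read the training sentences first.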
Under the best conditions, the system's error rate was only about 3%. Within the bounds of the experiment, this is almost reading another person's mind. However, the team also pointed out that the system's wrong predictions differ from the mishearings a human listener makes.
For example, 'The museum hires musicians every evening' is predicted to be 'The museum hires musicians every expensive morning'.
Or 'Part of the cake was eaten by the dog' is predicted to be 'Part of the cake was the cookie'.
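Error rates like the 3% figure above are typically reported as word error rate (WER): the minimum number of word substitutions, insertions, and deletions needed to turn the prediction into the true sentence, divided by the true sentence's length. Assuming the team used this standard metric, the first example above can be scored like so:

```python
def word_error_rate(reference: str, prediction: str) -> float:
    ref, hyp = reference.split(), prediction.split()
    # Levenshtein edit distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all predicted words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution / match
            )
    return d[len(ref)][len(hyp)] / len(ref)

ref = "the museum hires musicians every evening"
hyp = "the museum hires musicians every expensive morning"
print(round(word_error_rate(ref, hyp), 2))  # 2 edits / 6 words = 0.33
```

One substitution ("evening" becomes "morning") plus one insertion ("expensive") gives a 33% error rate on this single sentence; the 3% figure is an average over the whole test set.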
Artificial intelligence can recognize speech-related signals from the brain and translate them into text. (Image: Andrew Ostrovsky / Getty Images).
In the worst cases, the errors in the output text were almost unrelated, either semantically or phonetically, to the original words.
Although many limitations remain, this system opens a promising new direction for AI-based decoding of brain activity into human speech, with an error rate around 5%.
Of course, this comparison is not entirely fair. Natural speech draws on a vocabulary of tens of thousands of words. In contrast, this system only had to learn the signals for about 250 words across a short, limited set of sentences.
The team believes that in the future the system could play an important role in supporting patients who are unable to speak.