In the future, people who are unable to speak may be able to communicate clearly thanks to a device that turns their brain activity into a synthesized voice. Researchers at the University of California, San Francisco recently took a big step toward this technology, training artificial intelligence to reproduce a voice by reading brain activity and analyzing a person's lip movements. The result is impressive: you can listen to the synthesized voice right now.
The device is expected to work when a person mentally or physically reproduces mouth movements, even without making any sound. To understand how the human brain activates during particular mouth movements, the researchers recruited five volunteers for the test. The volunteers read short excerpts from children's stories while electrodes implanted in their brains recorded their activity.
The researchers then turned to two neural networks: the first mapped brain signals to lip movements, and the second turned those movements into synthesized speech. Listeners asked to transcribe the sentence fragments were able to recognize approximately 69% of the synthesized words. As in other studies, the shorter the sentences, the more accurate the result.
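The two-stage pipeline described above can be sketched in code. This is a minimal illustration only: it stands in plain linear transforms for the study's actual neural networks, and every name, dimension, and signal shape below is an assumption made for the example, not a detail from the research.

```python
import numpy as np

# Illustrative dimensions (assumed, not from the study)
N_ELECTRODES = 64    # brain-signal channels
N_ARTICULATORS = 12  # lip/jaw kinematic features
N_AUDIO = 32         # spectral features of the synthesized voice

rng = np.random.default_rng(0)

# Stage 1 stand-in: brain activity -> articulator (lip) movements.
# Stage 2 stand-in: articulator movements -> audio features.
# The real system trains neural networks for both stages; random
# matrices here just demonstrate the data flow.
W_stage1 = rng.normal(size=(N_ELECTRODES, N_ARTICULATORS))
W_stage2 = rng.normal(size=(N_ARTICULATORS, N_AUDIO))

def decode(brain_activity: np.ndarray) -> np.ndarray:
    """Map a (time, electrodes) recording to (time, audio) features."""
    movements = np.tanh(brain_activity @ W_stage1)  # stage 1
    audio_features = movements @ W_stage2           # stage 2
    return audio_features

# One second of simulated neural data sampled at 100 Hz
recording = rng.normal(size=(100, N_ELECTRODES))
features = decode(recording)
print(features.shape)  # (100, 32)
```

The key design point the sketch preserves is the intermediate articulatory representation: decoding movements first, then speech, is what let the system generalize better than mapping brain signals straight to audio.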
The researchers could improve the technology by using brain implants with a denser arrangement of electrodes and more sophisticated machine learning algorithms. Common features were found across the participants' brain responses, which suggests that future speech-synthesis devices could be easily adapted to each person. The researchers also noticed that the artificial intelligence sometimes recognized sounds that were not used during training, which is also encouraging.
Notably, similar technologies already exist, and they too are based on artificial intelligence. You can read about one of them in our earlier article.