Scientists create a neural decoder that converts brain waves into words

A team of scientists from the University of California, San Francisco (UCSF) has developed a neural interface that decodes the commands the brain sends to the vocal tract. According to the study, the technology enabled a patient with anarthria, paralyzed for about 15 years, to communicate.

The experts placed an array of thin, flexible electrodes on the surface of the volunteers’ cerebral cortex. The system records neural signals and sends the data to a speech decoder.

Scientists say this is the first time a paralyzed person who has lost the ability to speak has used neurotechnology to replicate entire words.

The system deciphers the user’s intent to use the vocal tract, which includes dozens of muscles controlling the larynx, tongue and lips. Humans use relatively few basic configurations of these muscles when speaking, the researchers said.
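
The decoding step described above can be pictured as classifying a window of neural features into an intended word. Below is a minimal sketch of that idea using a nearest-centroid rule; it is not the team’s actual model, and the feature dimensions, word list, and data are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

WORDS = ["hello", "water", "yes"]
N_FEATURES = 16  # e.g. one activity value per electrode (hypothetical)

# Pretend each intended word has a characteristic neural activity pattern.
centroids = {w: rng.normal(size=N_FEATURES) for w in WORDS}

def decode(window: np.ndarray) -> str:
    """Return the word whose stored pattern is closest to the observed window."""
    dists = {w: np.linalg.norm(window - c) for w, c in centroids.items()}
    return min(dists, key=dists.get)

# A noisy observation of "water" decodes back to "water".
observation = centroids["water"] + 0.1 * rng.normal(size=N_FEATURES)
print(decode(observation))
```

A real decoder operates on streaming signals and far richer models; this sketch only shows the shape of the problem: map a vector of neural features to a discrete word.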

Schematic diagram of a brain-computer voice interface. Data: New England Journal of Medicine.

The scientists noted that at the start of the study, the team faced a lack of data on patterns that could explain the link between brain activity and the simplest components of speech: phonemes and syllables.

Therefore, they used data provided by volunteers at the UCSF Epilepsy Center. At the center, electrode arrays are surgically placed on the surface of the cerebral cortex ahead of epilepsy surgery to map the brain areas involved in seizures.

Many of these patients had already taken part in research experiments involving recordings of their brain activity, so the experts asked them for permission to study their neural activity patterns during speech.

They recorded changes in the participants’ brain waves as they uttered simple words and sounds, and tracked the movements of their tongue and mouth.
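
One way to picture the data collected in this step: for each spoken word, a window of neural activity is paired with the simultaneously measured articulator positions. The sketch below builds such aligned pairs; every sampling rate, array shape, and annotation here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented recording parameters, for illustration only.
FS_NEURAL = 200        # neural samples per second
FS_ARTIC = 50          # articulator-tracking samples per second
N_ELECTRODES = 8

def make_pairs(events):
    """Cut out an aligned (neural, articulator) pair for each spoken word.

    `events` is a list of (word, start_s, end_s) annotations in seconds.
    """
    neural = rng.normal(size=(FS_NEURAL * 60, N_ELECTRODES))  # 1 min of cortical data
    artic = rng.normal(size=(FS_ARTIC * 60, 2))               # e.g. tongue x/y position
    pairs = []
    for word, start, end in events:
        n = neural[int(start * FS_NEURAL):int(end * FS_NEURAL)]
        a = artic[int(start * FS_ARTIC):int(end * FS_ARTIC)]
        pairs.append((word, n, a))
    return pairs

pairs = make_pairs([("hello", 1.0, 1.5), ("water", 2.0, 2.8)])
word, n, a = pairs[0]
print(word, n.shape, a.shape)  # hello (100, 8) (25, 2)
```

The point of the sketch is the alignment itself: both streams are cut at the same moments in time, so each neural window can later be related to the movements it accompanied.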

In some cases the scientists filmed a patient’s face and extracted articulatory gestures with a computer vision system. They also used an ultrasound machine placed under the jaw to image the movement of the tongue inside the mouth.

The team then matched neural patterns to muscle contractions. According to the researchers, the brain holds a representative map of the different parts of the vocal tract it controls. They also found that during fluent speech, different areas of the brain work together in a coordinated fashion.
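
“Matching neural patterns to muscle contractions” can be thought of as fitting a mapping from neural features to articulator movement. Here is a minimal sketch using ordinary least squares on synthetic data; the study used far richer models, and every dimension and number below is invented.

```python
import numpy as np

rng = np.random.default_rng(2)

N_SAMPLES, N_FEATURES, N_ARTICULATORS = 500, 16, 4

# Synthetic ground truth: articulator movement is a linear readout
# of neural features plus a little noise.
W_true = rng.normal(size=(N_FEATURES, N_ARTICULATORS))
X = rng.normal(size=(N_SAMPLES, N_FEATURES))          # neural features
Y = X @ W_true + 0.05 * rng.normal(size=(N_SAMPLES, N_ARTICULATORS))

# Fit the neural-to-articulator mapping with ordinary least squares.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# With this much data and little noise, the fit recovers the mapping closely.
err = np.abs(W_hat - W_true).max()
print(f"max coefficient error: {err:.3f}")
```

A linear readout like this is only the simplest possible stand-in; it illustrates how a “map” from cortical activity to vocal-tract parts can be estimated from paired recordings.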

The UCSF team has so far recruited two volunteers to test the system. In the future, the researchers plan to enroll more participants and aim to let users communicate at a rate of 100 words per minute.

Recall that in May, the US startup Synchron launched a clinical trial of the Stentrode neural interface, designed to help paralyzed patients.

In January, scientists developed an artificial intelligence-powered eye implant that could restore sight to a nearly blind woman.

In August 2021, Synchron received FDA approval for human testing of the neural interface.



Source: compiled from ForkLog by 0x Information. Copyright belongs to the author, Марина Глайборода; the material may not be reproduced without permission.
