Brain Activity Translated Into Images? The Future Is Here

Figure 1. Each pair presents a frame from a video watched by a test subject and the corresponding image generated by the neural network based on brain activity. Credit: Grigory Rashkov/Neurobotics

Researchers from Russian corporation Neurobotics and the Moscow Institute of Physics and Technology have found a way to visualize a person’s brain activity as actual images mimicking what they observe in real time. This will enable new post-stroke rehabilitation devices controlled by brain signals. The team published its research as a preprint on bioRxiv and posted a video online showing their “mind-reading” system at work.

To develop devices controlled by the brain and methods for cognitive disorder treatment and post-stroke rehabilitation, neurobiologists need to understand how the brain encodes information. A key aspect of this is studying the brain activity of people perceiving visual information, for example, while watching a video.

The existing solutions for extracting observed images from brain signals either use functional MRI or analyze the signals picked up via implants directly from neurons. Both methods have fairly limited applications in clinical practice and everyday life.

The brain-computer interface developed by MIPT and Neurobotics relies on electroencephalography, or EEG, a technique for recording brain waves via electrodes placed noninvasively on the scalp. By analyzing brain activity, the system reconstructs the images seen by a person undergoing EEG in real time.

“We’re working on the Assistive Technologies project of Neuronet of the National Technology Initiative, which focuses on the brain-computer interface that enables post-stroke patients to control an exoskeleton arm for neurorehabilitation purposes, or paralyzed patients to drive an electric wheelchair, for example. The ultimate goal is to increase the accuracy of neural control for healthy individuals, too,” said Vladimir Konyshev, who heads the Neurorobotics Lab at MIPT.

Figure 2. Operation algorithm of the brain-computer interface (BCI) system. Credit: Anatoly Bobe/Neurobotics, and @tsarcyanide/MIPT

In the first part of the experiment, the neurobiologists asked healthy subjects to watch 20 minutes' worth of 10-second YouTube video fragments. The team selected five arbitrary video categories: abstract shapes, waterfalls, human faces, moving mechanisms and motor sports. The latter category featured first-person recordings of snowmobile, water scooter, motorcycle and car races.

By analyzing the EEG data, the researchers showed that the brain wave patterns are distinct for each category of videos. This enabled the team to analyze the brain’s response to videos in real time.
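The idea of distinguishing video categories by their brain-wave patterns can be illustrated with a minimal classification sketch. The code below is an invented toy example, not the authors' actual pipeline: it assumes EEG recordings have already been reduced to small numeric feature vectors (e.g., band powers) and assigns a new sample to the category whose average training vector is nearest.

```python
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(sample, centroids):
    """Return the category whose centroid is nearest (Euclidean) to the sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda cat: dist(sample, centroids[cat]))

# Toy "training data": per-category EEG feature vectors (numbers are invented
# for illustration; a real system would extract features from raw EEG signals).
train = {
    "waterfalls": [[0.9, 0.1], [0.8, 0.2]],
    "faces":      [[0.1, 0.9], [0.2, 0.8]],
}
centroids = {cat: centroid(vs) for cat, vs in train.items()}

print(classify([0.85, 0.15], centroids))  # nearest to the "waterfalls" centroid
```

In practice the researchers' system works with far richer signals and many more samples, but the principle is the same: distinct categories of stimuli produce separable patterns, which is what makes real-time category recognition possible.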

Read the full article at Tech Xplore.