Rewatching dreams through video clips

Using functional magnetic resonance imaging (fMRI) and computational models, researchers at the University of California, Berkeley in the United States have succeeded in decoding and reconstructing the visual images registered in viewers' brains, in this case while the viewers watched Hollywood movie trailers.

The Berkeley scientists believe that in the future, by combining modern brain-imaging techniques with computer simulation, they may be able to tap into the minds of coma patients and convert that data into video clips, or even let someone rewatch their own dreams on YouTube.

For now, the technique can only reconstruct movie clips that viewers have already been shown. Even so, the researchers say, it is a genuine breakthrough that paves the way for reproducing the imagery inside our heads that no one else has ever seen, such as our own dreams and memories, as actual video clips.


"This is a big step towards recreating the image from the brain , " said Professor Jack Gallant, a neurologist, working at UC Berkeley, USA and co-author of the study. .

"We can explore our minds through video clips . " The results of this study were published online in Current Biology, September 22, 2011.

Ultimately, practical applications of the technology may include a better understanding of what is going on in the minds of patients who cannot communicate verbally, such as stroke victims, coma patients, and people with neurodegenerative diseases.

The technique could also lay the groundwork for brain-machine interfaces, so that people with cerebral palsy or paralysis, for example, could communicate their thoughts through computer systems connected to their brains.

However, the researchers point out that the technique is still far from letting users read other people's thoughts and intentions, as portrayed in the classic sci-fi film "Brainstorm," in which scientists record a person's sensations so that others can experience them.

In earlier work, Gallant and colleagues recorded the activity of the visual cortex while volunteers viewed black-and-white photographs. They then built a computational model that allowed them to predict, with high accuracy, which picture a viewer was looking at by decoding the recorded brain activity.

In their latest experiment, the researchers say they solved a much harder problem by decoding the brain signals generated by moving images.

"What we have in this study is really like watching a movie," said Shinji Nishimoto, the study's lead author and a researcher and a Ph.D. student in Gallant's laboratory. "In order for this technology to have a wide range of applications, we must understand how the brain handles dynamic visual experiences."

Nishimoto and two other members of the research team served as the subjects for the experiment, because the procedure required volunteers to remain inside the MRI scanner for hours at a time.

They watched two separate sets of Hollywood movie clips while fMRI measured blood flow through the visual cortex, the part of the brain that processes visual information. On a computer, the brain was divided into small three-dimensional regions known as volumetric pixels, or "voxels."
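
To make the idea of a voxel concrete, here is a minimal Python sketch of how a 4-D fMRI recording decomposes into one activity time series per voxel. All array shapes here are illustrative, not taken from the study.

```python
import numpy as np

# Hypothetical fMRI recording: one 3-D brain volume per time point,
# stored as a 4-D array (x, y, z, time). Shapes are illustrative only.
scan = np.random.rand(64, 64, 30, 540)

# Each (x, y, z) cell is one voxel. Flattening the spatial axes gives
# one activity time series per voxel, the unit the models work on.
n_voxels = 64 * 64 * 30
voxel_timeseries = scan.reshape(n_voxels, 540)

print(voxel_timeseries.shape)  # (122880, 540): one row per voxel
```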

"We build a model for each three-dimensional pixel (voxel) that describes the shape and motion information in the film mapped into brain activity , " Nishimoto said.


Brain activity recorded while the subjects watched the first set of clips was fed into a computer program that learned, second by second, to associate the visual patterns in the movies with the corresponding brain activity.
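
The per-voxel models described above are, in essence, regularized linear mappings from movie features to measured responses. The sketch below is a stand-in for that idea using scikit-learn's Ridge regression, with random arrays in place of real movie features and fMRI data; the sizes and the alpha value are illustrative assumptions, not the study's settings.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Stand-ins for real data: one row per second of the training movies.
# X would hold visual features of each movie second (e.g. shape and
# motion-energy measurements); Y holds every voxel's response.
n_seconds, n_features, n_voxels = 600, 200, 500
X = rng.standard_normal((n_seconds, n_features))
Y = rng.standard_normal((n_seconds, n_voxels))

# One regularized linear model per voxel; Ridge with a multi-output
# target fits all of them in a single call.
encoding_model = Ridge(alpha=100.0)
encoding_model.fit(X, Y)

# Given the features of a new movie second, predict the brain
# activity that second should evoke in every voxel.
predicted = encoding_model.predict(X[:1])
print(predicted.shape)  # (1, 500)
```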

Brain activity evoked by the second set of clips was used to test the movie-reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each clip would most likely evoke in each subject.
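
In outline, this decoding step runs the encoding model "in reverse": predict the activity each candidate YouTube second would evoke, then score how well each prediction matches the activity actually measured. A minimal sketch, continuing the hypothetical `encoding_model` above and using correlation as the match score (one reasonable choice, not necessarily the study's exact criterion):

```python
import numpy as np

def score_candidates(encoding_model, candidate_features, observed_activity):
    """Rank candidate movie seconds by how well their predicted
    brain activity correlates with the activity actually measured."""
    # Predicted activity for every candidate: (n_candidates, n_voxels).
    predicted = encoding_model.predict(candidate_features)
    # Correlate each prediction with the observed voxel pattern.
    p = predicted - predicted.mean(axis=1, keepdims=True)
    o = observed_activity - observed_activity.mean()
    scores = (p @ o) / (np.linalg.norm(p, axis=1) * np.linalg.norm(o))
    return np.argsort(scores)[::-1]  # indices of best matches first
```

Applied to the full candidate library, `score_candidates(encoding_model, youtube_features, measured_voxels)` would return the YouTube seconds ranked from most to least likely to have produced the measured activity.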

Finally, the 100 clips that the program judged most similar to what the subject had probably seen were merged to produce a blurry but continuous reconstruction of the original movie.
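
The merging step can be pictured as a simple frame-by-frame average: blending many roughly similar candidate clips is what makes the result blurry but continuous. A toy sketch, assuming the top-ranked clips are already loaded as equal-sized frame arrays:

```python
import numpy as np

def merge_clips(clips):
    """Average candidate clips frame by frame.

    clips: array of shape (n_clips, n_frames, height, width) holding
    the best-matching candidates. The mean of many roughly similar
    clips is a fuzzy but continuous reconstruction.
    """
    return np.mean(clips, axis=0)

# Hypothetical: 100 one-second candidates of 15 frames, 64x64 pixels.
clips = np.random.rand(100, 15, 64, 64)
reconstruction = merge_clips(clips)
print(reconstruction.shape)  # (15, 64, 64)
```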

Reconstructing movies from brain scans has been challenging because the blood-flow signal measured by fMRI changes much more slowly than the neural signals that encode the dynamic information in movies, the researchers said. For this reason, previous attempts to decode brain activity have focused on still images.

"We solve this problem by developing a separate two-phase model that describes the number of nerve signals that encode basic information and signals that measure the flow of blood flow , " according to Nishimoto. . Finally, Nishimoto said, scientists need to understand: how the brain processes the animation events that we witness in everyday life.

"We need to know how the brain works in natural conditions," Nishimoto said. "To do that, first, we need to understand the mechanism of brain activity while we are watching movies."

Other co-authors of the study include Thomas Naselaris of the Helen Wills Neuroscience Institute at UC Berkeley; An T. Vu of UC Berkeley; and Yuval Benjamini and Professor Bin Yu of the UC Berkeley Department of Statistics.