
Monday, September 26, 2011

Seeing What the Brain Sees


Here's a chunk out of the press release:
Using functional Magnetic Resonance Imaging (fMRI) and computational models, UC Berkeley researchers have succeeded in decoding and reconstructing people’s dynamic visual experiences – in this case, watching Hollywood movie trailers.

As yet, the technology can only reconstruct movie clips people have already viewed. However, the breakthrough paves the way for reproducing the movies inside our heads that no one else sees, such as dreams and memories, according to researchers.

“This is a major leap toward reconstructing internal imagery,” said Professor Jack Gallant, a UC Berkeley neuroscientist and coauthor of the study published online today (Sept. 22) in the journal Current Biology. “We are opening a window into the movies in our minds.”
So, think of visual perception as a real-time simulation of the external world, running in neural wetware and driven by sensory input. Remember that the brain is actively predicting what's coming next; it's expecting certain input. That's the simulation aspect. It then, in effect, uses sensory input to calibrate and refine its prediction.

So, you're walking your familiar walk to, say, the subway stop, or your mailbox, whatever. The brain 'cues up' an enactment, a simulation, of that walk, based, of course, on all the times you've walked it. That simulation is 'running' in the high-dimensional state space of your mind. If I interpret Walter Freeman's theorizing correctly (he's at a different Berkeley lab, not associated with this work), the simulation 'meets,' becomes 'entangled with,' incoming visual sensations frame by frame—click click click click—more or less like a movie. Each frame of visual experience marks an encounter between the simulation and the external world.
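Here's a minimal sketch of that predict-then-calibrate loop, in the spirit of a one-dimensional Kalman-style filter. To be clear, everything in it (the random walk, the noise levels, the gain) is invented for illustration; none of it comes from the paper.

```python
import numpy as np

# Toy model of 'simulation refined by sensory input': roll the internal
# state forward (prediction), then blend in each noisy observation
# (calibration). All parameters here are illustrative assumptions.

rng = np.random.default_rng(0)

true_path = np.cumsum(rng.normal(0.1, 0.05, size=100))  # the actual walk
estimate = 0.0   # the brain's running best guess
drift = 0.1      # learned expectation of how the state changes per step
gain = 0.3       # how strongly sensory evidence corrects the prediction

for position in true_path:
    prediction = estimate + drift                  # 'cue up' the next frame
    observation = position + rng.normal(0.0, 0.2)  # noisy sensory input
    error = observation - prediction               # prediction error
    estimate = prediction + gain * error           # calibrate against the world
```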

The Berkeley researchers have found a way to 'read' the traces of the wetware simulation and reconstruct the visual scene it was tracking. Generalizing from this, as the second paragraph above asserts, they expect one day to be able to visualize those simulations we run that are not met by, entangled with, visual input. Very clever.
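If you want the flavor of that 'reading' in code, here's a toy linear decoder: fit a ridge regression from voxel responses back to stimulus features on clips the subject has already seen, then apply it to a new response pattern. This is a sketch of the general idea only, not the lab's actual pipeline, and every name, size, and number below is made up.

```python
import numpy as np
from numpy.linalg import solve

# Toy decoding: learn a linear map W from brain responses to stimulus
# features using ridge regression, then 'reconstruct' features for a
# response the model hasn't seen. Synthetic data throughout.

rng = np.random.default_rng(1)

n_clips, n_voxels, n_features = 500, 200, 50
stimuli = rng.normal(size=(n_clips, n_features))    # known movie features
true_map = rng.normal(size=(n_features, n_voxels))  # fake brain 'encoding'
responses = stimuli @ true_map + rng.normal(0.0, 1.0, (n_clips, n_voxels))

# Ridge regression: W = (R'R + lam*I)^-1 R'S
lam = 10.0
W = solve(responses.T @ responses + lam * np.eye(n_voxels),
          responses.T @ stimuli)

# Decode a new response pattern into stimulus features
new_response = rng.normal(size=(1, n_voxels))
reconstructed = new_response @ W
print(reconstructed.shape)  # (1, 50): one reconstructed feature vector
```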

This links to a page from Gallant's lab. There you'll find a link to the full paper (behind a paywall), an abstract of the article, and some fascinating film clips, like this one:
[Embedded video: movie reconstruction clip from the Gallant lab]

The top row shows the video clip that was presented to each of three subjects. The bottom three rows depict possible reconstructions of the input video. The leftmost image in each row shows the best reconstruction; the others are runner-up candidates. Note the high agreement between the subjects.
