
Wednesday, April 27, 2016

Mapping Semantic Space to the Cortical Surface


A Continuous Semantic Space Describes the Representation of Thousands of Object and Action Categories across the Human Brain

Alexander G. Huth,¹ Shinji Nishimoto,¹ An T. Vu,² and Jack L. Gallant¹,²,³,*
¹Helen Wills Neuroscience Institute
²Program in Bioengineering
³Department of Psychology
University of California, Berkeley, Berkeley, CA 94720, USA
*Correspondence: gallant@berkeley.edu

http://dx.doi.org/10.1016/j.neuron.2012.10.014


SUMMARY
Humans can see and name thousands of distinct object and action categories, so it is unlikely that each category is represented in a distinct brain area. A more efficient scheme would be to represent categories as locations in a continuous semantic space mapped smoothly across the cortical surface. To search for such a space, we used fMRI to measure human brain activity evoked by natural movies. We then used voxelwise models to examine the cortical representation of 1,705 object and action categories. The first few dimensions of the underlying semantic space were recovered from the fit models by principal components analysis. Projection of the recovered semantic space onto cortical flat maps shows that semantic selectivity is organized into smooth gradients that cover much of visual and nonvisual cortex. Furthermore, both the recovered semantic space and the cortical organization of the space are shared across different individuals.
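The analysis described in the summary has a simple skeleton: fit a linear model per voxel that predicts its response from the category labels of the stimulus, then run principal components analysis across the voxels' weight vectors to recover the shared semantic dimensions. The sketch below illustrates that skeleton only; the data shapes, the random placeholder data, and the single ridge penalty are assumptions for illustration, not the authors' actual pipeline (which used WordNet labels of movie frames and more careful regularization and validation).

```python
# Minimal sketch of the voxelwise-model + PCA idea described in the summary.
# Shapes, placeholder data, and the ridge penalty are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.decomposition import PCA

n_time, n_categories, n_voxels = 3600, 1705, 2000

# X: stimulus design matrix (time x category indicators)
# Y: measured BOLD responses (time x voxels)
rng = np.random.default_rng(0)
X = rng.random((n_time, n_categories))
Y = rng.standard_normal((n_time, n_voxels))

# Fit one regularized linear model per voxel.
# coef_ has shape (n_voxels, n_categories): each row is a voxel's
# selectivity profile over the 1,705 categories.
W = Ridge(alpha=1.0).fit(X, Y).coef_

# PCA across the voxel weight vectors recovers the first few
# dimensions of the shared semantic space.
pca = PCA(n_components=4)
voxel_scores = pca.fit_transform(W)   # each voxel's position in semantic space
semantic_axes = pca.components_       # each PC is a direction in category space

# Coloring a cortical flat map by voxel_scores (not shown here) is what
# reveals the smooth semantic gradients described in the summary.
```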

* * * * * 

Some remarks by the lead author, Alexander Huth:
Back in 2012 I wrote a paper about the cortical representation of visual semantic categories. I showed that pretty much all of the higher visual cortex is semantically selective, and argued that this representation is better understood as gradients of selectivity across the cortex than as distinct areas. I also made a video that explains the paper, and there's a nice FAQ on our lab website. There's also a nifty online viewer for that dataset.

1 comment:

  1. I've speculated before (and have even asked professionals to look into) whether or not phonosemantically transparent vocabulary is processed more by right-hemisphere structures, given that such words prototypically come with excess prosody and iconic gestures (both pointing to right-hemisphere processing). JT
