Supplemental web site in support of the paper:

Predicting Human Brain Activity Associated with the Meanings of Nouns,
Tom M. Mitchell, Svetlana V. Shinkareva, Andrew Carlson, Kai-Min Chang, Vicente L. Malave, Robert A. Mason, Marcel Adam Just,
Science, May 30, 2008.


Feature Signatures (participant P1)

Here you may view the learned fMRI signatures for each of the 25 intermediate semantic features, according to the computational model learned for participant P1. These 25 signatures form the basis set of fMRI activation patterns from which all model predictions are built; that is, the predicted activation for each noun is the sum of these 25 fMRI activation patterns, each weighted by the co-occurrence frequency of the stimulus noun with the verb that defines the corresponding semantic feature.
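To make the weighting rule concrete, the following minimal sketch (in Python, with NumPy) computes a predicted image as the weighted sum of the 25 signatures. The names signatures and predict_activation, the voxel count, and the random stand-in values are illustrative assumptions for this page, not the code or data used for the model.

    import numpy as np

    N_FEATURES = 25      # one learned signature per intermediate semantic feature
    N_VOXELS = 20000     # placeholder voxel count, not P1's actual count

    # Stand-in for the learned basis set: one fMRI signature (voxel vector)
    # per semantic feature.  The real signatures are learned from training data.
    signatures = np.random.rand(N_FEATURES, N_VOXELS)

    def predict_activation(feature_values):
        """Predicted fMRI image for a noun: the sum of the 25 signatures,
        each weighted by the noun's value for that semantic feature
        (its co-occurrence frequency with the defining verb)."""
        feature_values = np.asarray(feature_values, dtype=float)
        assert feature_values.shape == (N_FEATURES,)
        return feature_values @ signatures    # weighted sum over features

    # Example: a made-up feature vector standing in for a stimulus noun's
    # (normalized) verb co-occurrence frequencies.
    noun_features = np.random.rand(N_FEATURES)
    predicted_image = predict_activation(noun_features)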

The 25 intermediate semantic features (one per verb) are: eat, touch, listen, smell, see, taste, hear, say, rub, push, lift, manipulate, move, run, approach, near, open, fill, clean, fear, drive, ride, enter, wear, break.

Learned signature for the intermediate semantic feature "push" (word forms: push, pushed, pushes)

The slices depict the 3D image from most inferior (top left) to most superior (bottom right). Posterior is up, anterior is down. The left side of the brain is on the left of each image.
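For readers who want to reproduce this slice layout for their own 3D volumes, the sketch below (Python with NumPy and matplotlib) tiles axial slices from most inferior to most superior with posterior at the top and the left side of the brain on the left of each panel. It assumes a hypothetical volume array indexed as (x: left-to-right, y: posterior-to-anterior, z: inferior-to-superior); that indexing is an assumption for this example, not a description of the files on this site.

    import numpy as np
    import matplotlib.pyplot as plt

    # Stand-in volume; replace with a real signature volume.  Indexing assumed:
    # x: left-to-right, y: posterior-to-anterior, z: inferior-to-superior.
    volume = np.random.rand(51, 61, 23)

    n_slices = volume.shape[2]
    n_cols = 6
    n_rows = int(np.ceil(n_slices / n_cols))
    fig, axes = plt.subplots(n_rows, n_cols, figsize=(2 * n_cols, 2 * n_rows))

    for z, ax in zip(range(n_slices), axes.ravel()):
        # Axial slice z: most inferior first (top-left panel),
        # most superior last (bottom-right panel).
        slice_xy = volume[:, :, z]
        # Transposing puts y (posterior-to-anterior) on the image rows; with
        # imshow's default origin ('upper'), posterior appears at the top and
        # anterior at the bottom.  Columns remain x (left-to-right), so the
        # left side of the brain is on the left of each panel.
        ax.imshow(slice_xy.T, cmap="gray")
        ax.set_title("slice %d" % z, fontsize=6)
        ax.axis("off")

    # Hide any unused panels in the grid.
    for ax in axes.ravel()[n_slices:]:
        ax.axis("off")

    plt.tight_layout()
    plt.show()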


Last modified on May 27, 2008 by tom.mitchell@cmu.edu