David Crow
david.crow@acm.org

Section B - Question 4
Perception-based and Meaning-based Knowledge Representations

4. In "Cognitive Psychology and its Implications", John Anderson draws a distinction between perception-based and meaning-based representation. What kinds of evidence support this distinction for information received in verbal form? In pictorial form? Are there alternative interpretations of this evidence and are they plausible? Why so, or why not? What are some of the theoretical issues raised by Anderson’s distinction? What sorts of observations might be made to clarify these issues?


There exists a large body of work suggesting a distinction between verbal and pictorial knowledge representation, much of it coming from the field of imagery. The work of Santa (1977, as cited in Anderson, 1995) illustrates the difference between verbal and visual representation. Santa's experiment had two conditions: (a) a geometric condition and (b) a verbal condition. The participants were presented with an array of objects (either geometric shapes or words) and allowed to study it. After the array was removed, the participants were immediately presented with one of a number of test arrays. The test array could consist of the same elements presented in the same configuration, the same elements presented in a linear configuration, different elements presented in the same configuration, or different elements presented in a linear configuration. The verbal array was identical to the geometric array except that, instead of geometric shapes, the objects were the words for the corresponding shapes. Santa predicted that participants in the geometric condition would encode the information in a way that preserved the spatial component of the display and would therefore be fastest on the test condition that retained this spatial component, i.e., the same-elements, same-configuration condition. In the verbal condition, Santa predicted that participants would be fastest on the identical-elements, linear-configuration condition. Both predictions were confirmed, suggesting that verbal and pictorial information are encoded differently: visual information tends to be stored according to spatial position, while other information, such as words, tends to be stored according to linear order.

For verbal information, there is evidence that people normally remember its meaning rather than its exact wording. The example above shows that people can retain exact verbal information in a word-recognition task. However, the work of Wanner (1968, as cited in Anderson, 1995) illustrates circumstances in which people do not remember the exact wording. Wanner divided participants into two groups: one that was warned they would be tested on their ability to recall particular sentences, and one that received no warning. At a later point in the instructions the critical sentence appeared, and immediately after it all participants heard a conclusion to the instructions. The participants were then presented with the critical sentence they had just heard plus a similar alternative. The difference could be stylistic, i.e., one that did not contribute to the meaning of the sentence, or the two sentences could differ in meaning. There was no difference between the warned and unwarned groups in the percentage of correct identifications for sentences that tested memory for meaning. However, participants were almost at chance in detecting a stylistic change when unwarned, but fairly good at detecting it when warned. This result suggests that exact wording is not naturally retained, but that it can be retained when people are cued to attend to it. Even with such a warning, meaning is still encoded better than stylistic information.

Memory for visual information has been studied by a number of investigators, including Shepard (1967, as cited in Anderson, 1995) and Standing (1973, as cited in Anderson, 1995). Standing presented participants with 10,000 images and then tested their recognition of these images. Participants made errors on only 17 percent of the items in the recognition test. Mandler and Ritchey (1977, as cited in Anderson, 1995) asked participants to study eight pictures for 10 seconds each. The participants' recognition memory was then tested using the exact pictures and some distractor pictures. A distractor could be either a token distractor, in which the changed detail is believed to have relatively little importance to the meaning of the picture, or a type distractor, in which the changed visual detail is relatively more important to the meaning of the picture. All eight pictures shown to the participants contained possible token and type distractors, and there was no systematic difference in the amount of physical change between a type and a token change. Participants recognized the original pictures 77 percent of the time, rejected the token distractors only 60 percent of the time, but rejected the type distractors 94 percent of the time. This suggests that participants are more sensitive to meaning-significant changes in a picture.

The separation of perception-based and meaning-based knowledge representation by Anderson (1995) leads to the suspicion that there are separate constructs in the brain for perceptual information and semantic knowledge. There is, however, evidence that no such separation is required. This evidence comes from parallel distributed processing (PDP) models and distributed representations of knowledge, in which knowledge is contained in patterns of activity and in the weights of the connections between units. A model in which knowledge is contained in the connections is plausible because PDP networks are neuron-like models of brain processing, so representation and processing in a PDP network can be considered broadly similar to brain-style processing. PDP networks can explain perception-based knowledge, as in McClelland and Rumelhart's (1981, as cited in Anderson, 1995) interactive activation model of word perception from letter features, and they can also represent higher-level meaning-based knowledge, including schemata (Rumelhart, Smolensky, McClelland, & Hinton, 1986). Anderson's distinction between perception-based and meaning-based knowledge implies a distinction between the types of representations and processes used for each in the brain, whereas PDP models do not require separate processes for encoding perceptual and meaning-based knowledge. Larger PDP models capable of handling both perceptual and meaning-based knowledge could clarify many of these issues.
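The idea that knowledge can live entirely in connection weights, with no separate perceptual and semantic stores, can be illustrated with a minimal sketch. The following Python example uses a Hopfield-style auto-associative network, a simple relative of the PDP models discussed above; the patterns, network size, and settling procedure are arbitrary illustrations chosen here, not details from Anderson or Rumelhart et al.

```python
import numpy as np

# Two arbitrary binary (+1/-1) patterns. In a PDP-style account, the same
# weight matrix could encode "perceptual" and "semantic" patterns alike.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, -1, -1, 1, 1, -1, -1],
])

n = patterns.shape[1]

# Hebbian learning: the knowledge is stored in the connection weights,
# not in any dedicated symbolic structure.
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=5):
    """Let the network settle from a (possibly degraded) cue."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties consistently
    return state

# Degrade the first pattern by flipping one unit, then let the net settle.
cue = patterns[0].copy()
cue[0] = -cue[0]
restored = recall(cue)
print(np.array_equal(restored, patterns[0]))  # → True: completion from weights
```

The same weight matrix stores both patterns, and nothing in the network marks one pattern as perceptual and the other as semantic; this is the sense in which such models need no separate encoding processes for the two kinds of knowledge.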

Anderson, J. R. (1995). Cognitive Psychology and its Implications (4th ed.). New York: W. H. Freeman and Company.
Rumelhart, D. E., Smolensky, P., McClelland, J. L., & Hinton, G. E. (1986). Schemata and sequential thought processes in PDP models. In J. L. McClelland & D. E. Rumelhart (Eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition: Vol. 2. Psychological and Biological Models. Cambridge, MA: MIT Press.