The 'nl-soar' executable and source

Much of the normal development of NL-Soar is done in a standalone system, rather than within an agent such as TacAir Soar or NTD-Soar. This environment requires a number of extra features beyond those provided by the normal Soar release, such as a phonological buffer, a way of measuring cognitive time, and graphical debugging tools for viewing the syntactic and semantic structures. These additions have been built into an executable called nl-soar, which can be run from /afs/cs/project/soar/utc/nl/bin/nl-soar. This document describes the environment it provides and the source code used to construct these extra features.

SimTime temporal model

Some experiments with NL-Soar require a mapping between NL-Soar's processing and the amount of cognitive time that we predict a human would require to do the same processing. Plain Soar provides no direct correspondence between the real time taken by the executable (which varies with the processor, system load, and so on) and the "virtual time" of the human being modeled. SimTime is incorporated into nl-soar to provide this correspondence. In NL-Soar, we typically want each operator to correspond to 50 msec of virtual cognitive time; for development purposes, however, we almost always set operators to 10 msec instead.
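
The following sketch illustrates only the operator-to-time mapping described above; it is not part of the SimTime or nl-soar source, and all names in it are hypothetical.

    # Illustrative sketch of SimTime's virtual-time mapping, assuming
    # virtual time = operators applied * msec per operator.
    MSEC_PER_OPERATOR_MODEL = 50   # target setting for cognitive modeling
    MSEC_PER_OPERATOR_DEV = 10     # typical setting during development

    def virtual_time_msec(operators_applied, msec_per_operator=MSEC_PER_OPERATOR_MODEL):
        """Predicted virtual cognitive time for a run, in milliseconds."""
        return operators_applied * msec_per_operator

    # A run that applies 120 operators predicts 6 seconds of cognitive
    # time at 50 msec per operator, 1.2 seconds at the 10 msec setting.
    print(virtual_time_msec(120))                          # 6000
    print(virtual_time_msec(120, MSEC_PER_OPERATOR_DEV))   # 1200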

Phonological buffer model

NL-Soar extends Soar with a model of a phonological buffer (roughly based on the work of Baddeley and Wiesmeyer). The buffer provides an input store that frees the model from having to process each word the instant it arrives. Without the buffer, we might lose each word as the next one arrived, forcing us to abort processing and move on to the next word to ensure that we have an opportunity to consider every word. With the buffer, words remain available for at least 2 seconds (of virtual time) before they disappear, so this "use it or lose it" decision can be postponed.
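
A minimal sketch of the decay behavior described above is given below. It is not the actual nl-soar buffer code; the class and method names are hypothetical, and the only assumption taken from the text is the 2-second virtual-time lifetime.

    # Hypothetical sketch: words stay available for at least 2 seconds of
    # virtual time after arrival, then are lost if not yet used.
    BUFFER_LIFETIME_MSEC = 2000

    class PhonologicalBuffer:
        def __init__(self, lifetime_msec=BUFFER_LIFETIME_MSEC):
            self.lifetime = lifetime_msec
            self.items = []   # list of (word, arrival time in virtual msec)

        def add(self, word, now_msec):
            self.items.append((word, now_msec))

        def available(self, now_msec):
            # Drop words whose lifetime in the buffer has expired,
            # then return the words that are still accessible.
            self.items = [(w, t) for (w, t) in self.items
                          if now_msec - t < self.lifetime]
            return [w for (w, _) in self.items]

    # Example: "the" arrives at t=0, "cat" at t=500 msec.
    buf = PhonologicalBuffer()
    buf.add("the", 0)
    buf.add("cat", 500)
    print(buf.available(1500))   # ['the', 'cat']  - both still available
    print(buf.available(2100))   # ['cat']         - "the" has decayed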

The phonological buffer evolved from work on NTD-Soar, which tried to extend Mark Wiesmeyer's work on visual attention to model auditory attention as well. It has been further modified and extended by work on nl-soar-sphinx, but only the basic version is described here.

Author: ghn@cs.cmu.edu (Last updated 96-05-14)