State Attribute: auditory-input

Problem space: top-ps

This attribute contains the input from audition: in particular, all of the words (and, in principle, general sounds) that are still in the phonological buffer. These features are modeled closely on those in Mark Wiesmeyer's thesis work (An Operator-Based Model of Human Covert Visual Attention).

Substructure of the attribute:

(<top-state> ^auditory-input <f>)
  (<f> ^type << speaker shape ... >>
       ^value <any>
       ^word-name <any>
       ^loc-x <num>
       ^loc-y unknown
       ^loc-z unknown
       ^relpos << l c r >>
       ^marked << no yes >>)
The value of the ^type attribute largely determines the possible values of the ^value attribute. If the type is speaker, the value should be the name (in the NTD domain, the call letters) of an individual; otherwise, it should be a word. The ^word-name attribute is added within cognition; it appears only for shapes that have been attended. The ^marked attribute reflects whether or not the feature has been attended to.
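To make the type-dependent fields concrete, here is a hedged C sketch of one buffered feature. The struct and function names are assumptions for illustration only; they are not the actual data structures used in hearing.c.

```c
#include <string.h>

/* Hypothetical mirror of one ^auditory-input feature.  Which fields are
   meaningful depends on `type`, as described above. */
typedef struct {
    char type[16];       /* "speaker", "shape", ... */
    char value[32];      /* speaker: call letters; otherwise: a word */
    char word_name[32];  /* filled in by cognition after attending; empty before */
    int  loc_x;          /* arrival time in the phonological buffer, in ms */
    char relpos;         /* 'l', 'c', or 'r' relative to the attention spotlight */
    int  marked;         /* nonzero once the feature has been attended to */
} AuditoryFeature;

/* Example: a freshly arrived, unattended shape feature carries no
   ^word-name yet, and ^marked starts out as no. */
AuditoryFeature make_shape(int loc_x, char relpos)
{
    AuditoryFeature f;
    memset(&f, 0, sizeof f);
    strcpy(f.type, "shape");
    f.loc_x = loc_x;
    f.relpos = relpos;
    f.marked = 0;
    return f;
}
```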

The ^loc-x attribute gives the simulated real time (in ms) of the arrival of the feature in the phonological buffer. (The other location attributes are irrelevant leftovers of the general mechanism used for all attention processes.) The ^relpos attribute gives the relationship of this number to the spotlight of auditory attention.
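Since ^loc-x is a time rather than a spatial coordinate, ^relpos amounts to comparing that time against the spotlight's temporal extent. The following is a minimal sketch under that reading; the window parameters and function name are assumptions, not code from the model.

```c
/* Possible values of the ^relpos attribute: left of, centered on, or
   right of the spotlight of auditory attention. */
typedef enum { RELPOS_L, RELPOS_C, RELPOS_R } RelPos;

/* Hypothetical classification of a feature's arrival time (^loc-x, in ms)
   against a spotlight window [spot_start, spot_end].  The real spotlight
   bookkeeping lives elsewhere (presumably hearing.c). */
RelPos relpos(int loc_x, int spot_start, int spot_end)
{
    if (loc_x < spot_start) return RELPOS_L;  /* arrived before the spotlight */
    if (loc_x > spot_end)   return RELPOS_R;  /* arrived after the spotlight */
    return RELPOS_C;                          /* falls inside the spotlight */
}
```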

Features that have not yet been attended to have only the ^type, ^relpos, and ^marked attributes available.
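The attended/unattended split above can be stated as a small predicate. This is a hypothetical helper written for this document, not a function in the model:

```c
#include <string.h>

/* Returns 1 if the named attribute is available on a feature, given
   whether the feature has been attended to.  Unattended features expose
   only ^type, ^relpos, and ^marked; the rest appear after attention. */
int attribute_available(const char *attr, int attended)
{
    if (strcmp(attr, "type") == 0 ||
        strcmp(attr, "relpos") == 0 ||
        strcmp(attr, "marked") == 0)
        return 1;
    return attended;
}
```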

Implementation of input function: The creation of the structure is handled by hearing.c, but the actual placement of the attribute on the top state occurs as a result of the InputField function in inputfield.c.