From newshub.ccs.yorku.ca!torn!utcsri!rutgers!sun-barr!cs.utexas.edu!uunet!trwacs!erwin Tue Jul 28 09:41:36 EDT 1992
Article 6473 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:6473 sci.cognitive:210
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!sun-barr!cs.utexas.edu!uunet!trwacs!erwin
From: erwin@trwacs.fp.trw.com (Harry Erwin)
Newsgroups: comp.ai.philosophy,sci.cognitive
Subject: The Brain as a Submarine Combat System
Keywords: cognition
Message-ID: <667@trwacs.fp.trw.com>
Date: 17 Jul 92 12:05:57 GMT
Followup-To: sci.cognitive
Organization: TRW Systems Division, Fairfax VA
Lines: 83


The Brain as a Submarine Combat System
Harry Erwin

The architecture of submarine combat systems has evolved over the last 50
years into something interestingly similar to the human brain, and
consideration of these similarities may give insight into cognitive
function. The presentation here is deliberately quite general, since the
goal is an understanding of human cognition rather than of submarine
combat systems.

Your typical submarine combat system consists of two major functions:
sensor processing and operations control. Sensor processing is multi-modal
and operates with multiple parallel streams to search, localize, track,
and identify targets of interest. A target is typically characterized by
its unique sound spectrum, and that spectrum, together with its position
and velocity, is used to merge data from multiple sonar systems. Data
from other sources are merged
with the sonar data to create a situation database and display, and that
is the primary product passed to operations control.
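The merging step can be sketched in code. Everything below is my own
invention for illustration -- the names, the toy spectra, and the
thresholds have nothing to do with any actual combat system; the point is
only that contacts are associated by spectral signature plus a kinematic
gate, and fused into a single situation database:

```python
# Hypothetical sketch: merging detections from multiple sonar arrays into
# one situation database, keyed on each contact's sound spectrum plus its
# position/velocity estimate. All names and numbers are invented.
from dataclasses import dataclass, field

@dataclass
class Detection:
    sensor: str        # which sonar array reported it
    spectrum: tuple    # simplified spectral signature (dominant lines, Hz)
    position: tuple    # (x, y) in meters
    velocity: tuple    # (vx, vy) in m/s

@dataclass
class Track:
    spectrum: tuple
    position: tuple
    velocity: tuple
    sources: list = field(default_factory=list)

def spectra_match(a, b, tol=1.0):
    """Two contacts match if their dominant spectral lines agree."""
    return len(a) == len(b) and all(abs(x - y) <= tol for x, y in zip(a, b))

def kinematics_match(track, det, gate=500.0):
    """Kinematic gate: the detection must lie near the existing track."""
    dx = track.position[0] - det.position[0]
    dy = track.position[1] - det.position[1]
    return (dx * dx + dy * dy) ** 0.5 <= gate

def fuse(detections):
    """Merge detections into tracks: the situation database."""
    tracks = []
    for det in detections:
        for trk in tracks:
            if spectra_match(trk.spectrum, det.spectrum) and kinematics_match(trk, det):
                # Same target seen by another sensor: average the state.
                trk.position = tuple((a + b) / 2 for a, b in zip(trk.position, det.position))
                trk.velocity = tuple((a + b) / 2 for a, b in zip(trk.velocity, det.velocity))
                trk.sources.append(det.sensor)
                break
        else:
            tracks.append(Track(det.spectrum, det.position, det.velocity, [det.sensor]))
    return tracks
```

Two arrays hearing the same spectrum at nearby positions produce one
track; a contact with a different spectrum stays separate.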

Operations control plans and selects the response of the submarine unit to
its situation. Part of this function provides generalized motor direction,
which is implemented by the submarine crew and other elements of
operations control, and part of this function monitors the situation and
conducts tactical and operational planning.

Interestingly, the combined functions of situation monitoring and
operational planning are not understood in any depth. Development in
these areas usually concentrates on working with the submarine crews to
define effective displays and data structuring, but little is known
about how a situation _should_ be monitored. Planning is also poorly
understood. The basic goal of most existing automated tactical systems is
to support the efficient implementation of the commander's will, as made
known to his supporting organization. In other words, the question is
begged. When I investigated this in detail from the standpoint of
behavioral biology, I found myself in a quicksand of chaotically evolving
processes.

Now to the human brain. The parallels between Sensor Processing and the
sensory processing in the human brain are deep and detailed, to the point
that many of the same keys appear to be used. Based on Young's recent
paper (Nature, 7/9/92), the brain performs pattern recognition and object
localization/tracking in parallel, with the products of each fused in the
frontal cortex using some correlating key. This seems to produce a
merged display that consists of object positions and motion, annotated
with pattern identification. The fused image makes up an internal
environmental model that the "ego" operates within.
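The fusion of the two streams can be caricatured as a join on the
correlating key. This is a toy of my own construction, not Young's model
-- the function names and the scene representation are invented:

```python
# Illustrative sketch: a "what" stream (pattern recognition) and a "where"
# stream (localization/tracking) run in parallel, then are joined on a
# shared correlating key to form the merged display.

def recognize(scene):
    """'What' stream: object key -> pattern identification."""
    return {key: label for key, label, _, _ in scene}

def localize(scene):
    """'Where' stream: object key -> (position, motion)."""
    return {key: (pos, vel) for key, _, pos, vel in scene}

def fuse(what, where):
    """Join the streams on the correlating key: positions and motion
    annotated with pattern identification."""
    return {key: {"label": what.get(key, "unknown"),
                  "position": pos, "motion": vel}
            for key, (pos, vel) in where.items()}
```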

The Operations Control function has less detailed parallels. The brain
appears to support action beam generators and cognitive beam generators,
which are modulated via inhibition by multiple layers of censors. The
action beam generators seem to key to the general situation and produce a
non-specific beam of motor commands to respond to that situation. The
cognitive beam generators appear to generate a non-specific beam of
initiators, each of which can replay a memory trace, trigger an action
beam, or trigger further cognitive beams. The censors inhibit specific
elements of those beams so as to make each response even more specific to
the situation and also to create a changing "movie" of operations. For
sensory memory, the replay also seems to involve the operation of censors
affecting a generalized sensory data stream to reconstruct a sequence of
fused display images. If it looks like a complex hierarchy of Turing
machines, that's probably what it is.
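A minimal sketch of the beam-and-censor idea, with everything (the
repertoire, the situation labels, the censor sets) invented by me for
illustration: the generator emits a broad, non-specific set of candidate
commands, and successive censor layers inhibit elements until what
survives is specific to the situation.

```python
# Speculative sketch of an action beam generator modulated by censors.
# All names and the toy command repertoire are invented.

def action_beam(situation):
    """Generate a non-specific beam: every command loosely keyed to
    the general situation class (a toy lookup, not a real repertoire)."""
    repertoire = {
        "threat": ["dive", "turn_away", "flank_speed", "go_quiet", "fire"],
        "quiet":  ["hold_course", "listen", "log"],
    }
    return list(repertoire.get(situation, []))

def censor_layer(beam, inhibited):
    """One censor: inhibit (remove) specific elements of the beam."""
    return [cmd for cmd in beam if cmd not in inhibited]

def respond(situation, censors):
    """Apply censor layers in sequence; what survives the inhibition
    is the specific response to this particular situation."""
    beam = action_beam(situation)
    for inhibited in censors:
        beam = censor_layer(beam, inhibited)
    return beam
```

Note that the censors act only by subtraction, which is the point of the
model: specificity comes from inhibition, not from the generator.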

The fly in the ointment for this model is how it is trained. If we
consider cerebellar (motor) memory, we discover that the motor cortex is
capable of concentrating on a motor task to control specific components of
the motor beam. The cerebellum learns these adjustments and the
circumstances under which they are to be made, freeing the motor cortex
from making them in the future. But _how_ does the motor cortex _select_
the specific sequence of motor actions? If it is simply modulating a beam
of motor or cognitive commands, who generates that beam? There is an
infinite regress possible here. We usually solve that problem in real
systems by short-circuiting it--by introducing a man in the loop. An
infinite regress is allowable in the brain if it converges, but dynamic
programming ("jillions of special cases") warns us that convergence cannot
be taken for granted.
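The cerebellar learning described above looks, in caricature, like a
cache: the cortex computes a correction once, and the cerebellum stores
it against the circumstances so the cortex never has to attend to it
again. The sketch below is entirely my construction and just makes that
reading concrete:

```python
# Speculative sketch: the cerebellum as a cache of context -> adjustment
# pairs, freeing the motor cortex from repeating a learned correction.
# All names are invented.

class Cerebellum:
    def __init__(self):
        self.learned = {}        # circumstances -> adjustment

    def adjust(self, context, cortex):
        """Return the adjustment for this context, learning it from the
        (expensive, attention-demanding) cortex on first encounter."""
        if context not in self.learned:
            self.learned[context] = cortex(context)   # cortex attends once
        return self.learned[context]                  # thereafter it's free

calls = []

def motor_cortex(context):
    """Stand-in for the cortex deliberately computing a correction."""
    calls.append(context)
    return {"gain": 1.0 + 0.1 * len(context)}
```

The regress question remains untouched by this sketch: the cache only
stores adjustments; something still has to generate the beam being
adjusted.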

If we solve this problem by introducing a system element that is _not_ a
beam generator or censor, but has some sort of distinct supervisory role,
we've got a "ghost in the machine," which we don't want, unless we can
define it in a way that allows us to mechanize it.
-- 
Harry Erwin
Internet: erwin@trwacs.fp.trw.com


