Newsgroups: sci.cognitive,bionet.neuroscience,comp.ai.philosophy
From: ohgs@chatham.demon.co.uk (Oliver Sparrow)
Path: cantaloupe.srv.cs.cmu.edu!das-news.harvard.edu!news2.near.net!MathWorks.Com!yeshua.marcam.com!usc!howland.reston.ans.net!news.sprintlink.net!demon!chatham.demon.co.uk!ohgs
Subject: Re: Mind Models
References: <35s56r$t3t@portal.gmu.edu> <3655uj$led@zip.eecs.umich.edu>
Organization: Royal Institute of International Affairs
Reply-To: ohgs@chatham.demon.co.uk
X-Newsreader: Demon Internet Simple News v1.27
Lines: 37
Date: Mon, 26 Sep 1994 08:23:29 +0000
Message-ID: <780567809snz@chatham.demon.co.uk>
Sender: usenet@demon.co.uk
Xref: glinda.oz.cs.cmu.edu sci.cognitive:5199 comp.ai.philosophy:20628

I had a go at a similar bunch of concepts (entered under the heading "the 
colour red II"). If one has a detector of a commonly-encountered quality of
the stream of percept - "red" - then, if replicated sufficiently, each 
subdivision of the stream will arrive with a score for its possession of that
quality. Two such qualities are open to fuzzy logic boundary setting: in the
space spanned by the two dimensions, various areas come to be recognised as
flags for the presence of higher-level qualities. Systems come to draw on
these areas in the percept space as input into the generation of higher order
abstractions. These can build and build: a system whose activation is
paralleled by "pleasurable social activity going on" lights up and so, in
coincidence, does another system: round+red+smell of rubber+.... = "ball"; 
and the two together serve to activate the motor skills which the toddler
has been acquiring. One notes that the brain appears to undergo "warming up"
of areas which *may* be needed: perhaps through chaotic signals, perhaps
by means of diffusible compounds, perhaps through ramifying, "voting" systems
which are built out of hierarchies whose sub-units have become aroused.
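By way of illustration only (my sketch, not anything the above commits to),
the two-quality case can be rendered as a pair of fuzzy detectors whose
scores span a 2-D percept space, with a higher-level flag firing on a
region of that space via a fuzzy AND (the minimum of the memberships).
The detectors and thresholds here are invented for the example:

```python
import math

def redness(rgb):
    """Fuzzy score in [0, 1] for 'red': how far red dominates green and blue."""
    r, g, b = rgb
    return max(0.0, min(1.0, (r - max(g, b)) / 255.0))

def roundness(perimeter, area):
    """Fuzzy score in [0, 1] for 'round': isoperimetric ratio, 1.0 for a circle."""
    if perimeter == 0:
        return 0.0
    return min(1.0, 4 * math.pi * area / perimeter ** 2)

def ball_like(rgb, perimeter, area):
    """Higher-level quality: a region of the 2-D space, here a fuzzy AND (min)."""
    return min(redness(rgb), roundness(perimeter, area))

# A red, nearly circular blob lands well inside the "ball" region;
# a green square scores zero on redness and so falls outside it.
red_circle = ball_like((230, 30, 20), perimeter=2 * math.pi * 10, area=math.pi * 100)
green_square = ball_like((20, 200, 30), perimeter=40, area=100)
```

The point of the min is just that the flag demands *both* qualities at
once, which is one simple way of carving an area out of the spanned space.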

My point is that this approach seems to allow three things which we know to
be true:

1: Hierarchical associations of data into high level abstractions.
2: Looped systems of feedback and self-reference by which such associations
   develop, are reinforced or pruned and by which they learn.
3: Abstractions from associative logic into abstract, symbol based systems
   of reasoning, such that these can influence (and be influenced by)
   the more primary forms of association.
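Point (2) can be caricatured in a few lines: an association weight that is
reinforced when its co-activation coincides with a reward signal, and
otherwise decays toward pruning. The update rule and rates below are my own
toy choices, not a claim about the mechanism:

```python
def update_weight(w, co_active, reward, lr=0.2, decay=0.05):
    """Reinforce the association on rewarded co-activation; otherwise
    let it decay, so an unused or unrewarded association is pruned back."""
    if co_active and reward:
        return min(1.0, w + lr * (1.0 - w))   # move toward full strength
    return max(0.0, w - decay * w)            # geometric decay toward zero

# Repeated rewarded co-activation strengthens the association...
w = 0.5
for _ in range(10):
    w = update_weight(w, co_active=True, reward=True)
strong = w

# ...while co-activation without reward lets it wither.
w = 0.5
for _ in range(10):
    w = update_weight(w, co_active=True, reward=False)
weak = w
```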

One of the difficulties of AI - at least as practiced by some of its 
proponents - is that it has started with (3) and is wrestling with how to 
get down to (1). Issues such as "grounding", the representation of 
"common sense" knowledge, pattern detection and the identification
of significant novelty all stem from this buttock-led approach.

_________________________________________________

  Oliver Sparrow
  ohgs@chatham.demon.co.uk
