From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Dec  9 10:47:58 EST 1991
Article 1856 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle (was.....)
Message-ID: <8dDECBa00iV1A1k8AU@andrew.cmu.edu>
Date: 4 Dec 91 16:34:21 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 65

Stanley Friesen writes:

> Let's see if I can clarify this.  We have an innate tendency to find
> patterns, a tendency so strong that we often see patterns where there
> are none. [The
> .....
> An infant's *instinctive* (that is built-in) exploratory behavior is designed
> to produce a correlation between the various sensory modalities (such as
> to associate certain visual cues with certain tactile and motor responses).
> This association seems to be the basis of internalized mental models.

I quite agree that there are innate abilities for accomplishing certain
low-level stimulus processing, as well as various built-in
stimulus-response pathways, and the refinement of these can probably be
explained in terms of simple feedback mechanisms.  The sensory stimuli
processed by these facilities eventually influence our building of
mental models.  But the building of such models, even rather simple
ones, would seem to be associated with higher-level cognitive processes
where, I believe, innateness plays a much less important (if any) role.

> How about a set of algorithms for building pattern matchers?  Perhaps along
> with suggestions about which sensory modalities each one is best suited to
> dealing with.
> 
> In addition there would be a meta-program to drive the meta-generators.
> This itself would be a pattern matching program that attempts to apply the
> appropriate generator or set of generators to each unrecognized environmental
> stimulus.
>  
> And of course each level is programmed to use prior successes and failures
> in deriving heuristics for selecting pattern matching mechanisms.

This all sounds very computational.  Regardless of how many meta-levels
you propose, somewhere there is going to be an 'origin of the matcher'
problem if pattern matching is the physical process through which your
theory is realized.  At these (cognitive) levels, my claim is that
(1) there are essentially no innate mental functions (there may be for
language, at some level), and (2) the amount of indirection between
stimulus and response is just too great to create matchers via simple
feedback mechanisms.
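For concreteness, the quoted generator/meta-generator scheme could be
sketched roughly as follows.  This is a minimal caricature; every name
and the scoring heuristic are my own inventions for illustration, not
anything Friesen specified:

```python
# Toy sketch (all names invented): a "generator" builds a matcher for a
# stimulus; the meta-program picks generators by a heuristic score
# derived from prior successes and failures.

class Generator:
    def __init__(self, name, modality):
        self.name = name
        self.modality = modality  # sensory modality this generator suits
        self.successes = 0
        self.attempts = 0

    def score(self):
        # Smoothed success rate: the "heuristic" derived from prior
        # successes and failures at this level.
        return (self.successes + 1) / (self.attempts + 2)

    def build_matcher(self, stimulus):
        # Trivially "learn" an exact matcher for this stimulus.
        self.attempts += 1
        matcher = lambda s, _target=stimulus: s == _target
        if matcher(stimulus):
            self.successes += 1
            return matcher
        return None


def meta_match(generators, stimulus, modality):
    """Meta-program: try the best-scoring generators suited to this
    modality until one yields a matcher for the unrecognized stimulus."""
    candidates = [g for g in generators if g.modality == modality]
    for g in sorted(candidates, key=lambda g: g.score(), reverse=True):
        m = g.build_matcher(stimulus)
        if m is not None:
            return m
    return None
```

Even in this toy form, the 'origin of the matcher' problem shows up:
the matchers are built by code someone already wrote, at a level the
scheme itself does not explain.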

> Again, I am not entirely sure why this is even relevant!  The
> preprogrammed part is the *meta* level.  Any given pattern matching
> process in a 'mature' human, or AI system, would be constructed
> internally by the system to meet some established need.  (Oops, oh
> yes, *basic* *needs* are also preprogrammed).
> [Of course the process of generating pattern matchers and mental
> models can easily lead to the development of derived needs].

The question is: *how* is it constructed?  For higher-level processes
and pattern matching, simple feedback mechanisms seem implausible.

> But are they?  As I have pointed out above, all 'mental symbols' seem
> to be *learned* *associations*.  They are in no way a priori; they are
> derived.
> Thus any system that is capable of deriving its own internal symbology in
> the course of interacting with its environment (whatever that environment
> may be) is, at least potentially, intelligent.

Again, how does it derive its own internal symbology such that the
symbols are causal in the system in virtue of the particular symbols
they are?  For pattern matching systems, you need a matcher to effect
this causality, and I think that standard feedback mechanisms for
creating such matchers are highly implausible for higher-level
cognitive processes.

-Frank
