From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima Mon Dec  9 10:48:26 EST 1991
Article 1904 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle (was.....)
Message-ID: <300@tdatirv.UUCP>
Date: 5 Dec 91 20:13:21 GMT
References: <8dDECBa00iV1A1k8AU@andrew.cmu.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 88

In article <8dDECBa00iV1A1k8AU@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
|I quite agree that there are innate abilities for accomplishing certain
|low-level stimulus processing, as well as various built-in stimulus-response
|pathways.  And the refinement of these can probably be explained in terms
|of simple feedback mechanisms.

I doubt if a 'simple' feedback mechanism is sufficient to explain the kind
of learning an infant accomplishes.  I strongly suspect that a higher level
'pre-cognitive' structure is necessary to guide the feedback into useful
channels.

| And the sensory stimuli processed by these
|facilities eventually influence our building of mental models.  But the
|building of such models, even if they are rather simple, would seem to be
|associated with higher-level cognitive processes where, I believe, innateness
|plays a much less important (if any) role.

At the very least, a 'need' for forming higher-level cognitive structures
and a certain amount of pre-wiring of basic ones seem necessary to explain
observed facts about infant development.

|This is all very computational sounding.  Regardless of how many meta-levels
|you propose, somewhere there is going to be an 'origin of the matcher' problem
|if pattern matching is the physical process through which your theory is
|realized.

I am still not quite sure what this means!  Certainly some level has to have
the physical encoding prewired, or preprogrammed, and it must have some
initial, default method of filtering the encoded data.  But I do not see
that this is anything more than a data format issue.  I suspect that any
informationally equivalent encoding scheme can be substituted for the one
the brain actually uses with no effect on the nature of the cognitive process.
[That is, the pattern matching operation in the brain is algorithmically
independent of the encoding scheme].
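To make the encoding-independence claim concrete, here is a toy sketch of my own (in modern Python, purely illustrative - the matcher and encodings are invented for the example, not drawn from any real system): the same matching procedure run over two informationally equivalent encodings of the same data yields identical results.

```python
# Illustrative sketch: a pattern matcher whose behaviour is invariant
# under a bijective re-encoding of its data, i.e. the matching algorithm
# is independent of the particular encoding scheme.

def matches(pattern, datum):
    """A trivial matcher: does the datum begin with the pattern?"""
    return datum[:len(pattern)] == pattern

# Encoding A: symbols as character strings.
data_a = ["cat", "car", "dog"]
pattern_a = "ca"

# Encoding B: the same symbols re-encoded as tuples of integers via an
# arbitrary bijection (here, character codes stand in for any scheme).
encode = lambda s: tuple(ord(c) for c in s)
data_b = [encode(d) for d in data_a]
pattern_b = encode(pattern_a)

# The matcher gives the same answers over either encoding.
results_a = [matches(pattern_a, d) for d in data_a]
results_b = [matches(pattern_b, d) for d in data_b]
assert results_a == results_b  # [True, True, False]
```

The point of the sketch is only that the matching *operation* never inspects what the codes "are made of"; any informationally equivalent substitution leaves its behaviour unchanged.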

Are you trying to claim that the pattern matching mechanism must be tied
to the physical encoding scheme?

| At these levels (cognitive), my claim is that 1). there are 
|essentially no innate mental functions (there may be for language, at some 
|level) and 2). the amount of indirection between stimulus and response is just 
|too great to create matchers via simple feedback mechanisms.

On 1, I really doubt that linguistic ability makes use of any innate
functions that are not also used for other types of processing; otherwise
the evolution of language is too difficult (this is called 'pre-adaptation'
in biological parlance).  Thus whatever mechanism we use for learning language
we also use for making other types of mental models - no special treatment
of language is really necessary.

On 2, I essentially agree, and I do not think the brain uses a *simple*
feedback mechanism to derive mental models.  But then, why should a
computer program be restricted to simple mechanisms?
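For concreteness, one standard reading of a *simple* feedback mechanism is the classic error-correction (delta) rule for a linear threshold unit. The sketch below is my own illustration of that reading, not a claim about the brain or any particular AI program:

```python
# A minimal "simple feedback" learner: the perceptron error-correction
# rule.  The feedback signal is just (target - output), fed back to
# adjust the weights.

def train(examples, epochs=20, rate=0.1):
    """examples: list of (inputs, target) pairs with target 0 or 1."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in examples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # the feedback signal
            w = [wi + rate * err * xi for wi, xi in zip(w, x)]
            b += rate * err
    return w, b

# Such a rule can learn linearly separable mappings (here, logical AND)...
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(examples)
outputs = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
           for x, _ in examples]
assert outputs == [0, 0, 0, 1]
# ...but famously not XOR - one illustration of how feedback this simple
# falls short of the indirection higher-level cognition requires.
```

Nothing restricts a program to rules this simple; layered or hierarchical feedback schemes are just as programmable.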

|The question is; *How* is it constructed?  For higher level processes
|and pattern matching, this seems implausible.

This is an issue for neurologists to study in the lab.  I do not now claim
to have the complete answer.  But since neural systems seem to be
mainly pattern matching systems, and since the human brain comes prewired
in a *hierarchical* structure, from neuron to region to brain, it seems likely
that this corresponds to some sort of preset pattern matching capacity.

|> But are they?  As I have pointed out above all 'mental symbols' seem to be
|> *learned* *associations*.  They are in no way a priori, they are derived.
|> Thus any system that is capable of deriving its own internal symbology in
|> the course of interacting with its environment (whatever that environment
|> may be) is, at least potentially, intelligent.
|
|Again, how does it derive its own internal symbology such that the symbols
|are causal in the system according to the particular symbols they are?  For
|pattern matching systems, you need a matcher to effect this causality, and
|I think that standard feedback mechanisms for creating these matchers are
|highly implausible for higher-level cognitive processes.

What do you mean by 'causal'?  Certainly I agree that a certain amount of
environmental manipulation may be necessary for the formation of intelligence,
since it is required for human infants.  But why can't a computer explore
its environment and so determine the causal relationships inherent in it?

I have never claimed, and do not now claim, that *current* approaches to
AI are fully sufficient to achieve intelligence, but I see no real *theoretical*
reason why that is not just a temporary limit of current technology.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
