From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Sun Dec  1 13:06:40 EST 1991
Article 1755 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle (was .....)
Message-ID: <EdBeY9i00WBME1JoNV@andrew.cmu.edu>
Date: 29 Nov 91 20:54:33 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 81

Stanley Friesen writes:

>>In article <AdAd1qK00Uh7M2NfoH@andrew.cmu.edu> fb0m+@andrew.cmu.edu
>>(Franklin Boyle) writes:
>>|Interesting. How does the system build this representation such that 
>>|the representing entities are causal within the system?  That is, where
>>|are the matchers (i.e. programs) that enable them to be causal so that
>>|the system can achieve its complex goals?  Did
>>|you already have routines in there that "knew" what they
>>|were looking for in order to build the representation, or did this
>>|all happen spontaneously, driven by the forms of the signals from the 
>>|objects and structural relationships in the environment?  Seems to me
>>|that unless it was the latter this is no different, with respect to an
>>|intrinsic capacity for reference, than hand coding the representation.
 
>Why should this be that important?  Unless you accept the 'tabula rasa'
>model of human development, then even we are born with already existing
>routines that 'know' what to 'look' for.  And this latter is indeed what
>most current neurologists and psychologists accept.  *We* are preprogrammed
>to find meaning, so why should it matter if the computer is also?

In what sense are *we* preprogrammed to "find meaning"?  There does appear
to be experimental evidence that certain areas of the visual cortex, for
example, contain groups of neurons which react selectively to particular
features of the visual stimulus.  But presumably these are "lower level"
perceptual processes which may just be enhancing the input in certain
ways before it reaches other parts of the brain (who knows?).  On what basis 
do "most current neurologists and psychologists accept" that we are pre-
programmed throughout the cortex, especially in those areas where "higher
level" cognitive processing presumably takes place?

And what does it mean for *us* to be preprogrammed?  For a computer, it
means building a set of pattern matchers that enable the system to function
according to the meanings of its inputs, meanings which *we* project onto
them -- of course, *physically* the process of pattern matching depends
only on the input forms (we say it depends on our projected meanings
because we want the programs and inputs to yield consistent
interpretations, and so we program accordingly).  Now, is the physical
process of pattern matching as it takes place in digital computers the same
as the physical processes which store and manipulate input to the brain
(and I don't mean the same medium or physical objects, which are obviously
different -- I mean causally the same, in the sense of *how* the input
brings about a physical change)?  If they are physically different, it
seems likely that this difference constitutes the physical underpinnings of
why a programmed digital computer cannot "understand", as Searle suggests.
In other words, it is a candidate for Searle's so-called "causal
properties".
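
As a crude illustration of form-only matching, consider the following toy
C fragment (just a sketch I made up for this post, not anyone's actual
system).  Everything the matcher does is driven by character codes; the
labels "animal-rule" and "electronics-rule" carry meaning only for us:

    #include <stdio.h>
    #include <string.h>

    /* A toy matcher: it fires an action whenever its input contains a
     * particular byte sequence.  Nothing here depends on what "cat"
     * means; strstr() compares character codes and nothing else.
     */
    static void act(const char *label) { printf("fired: %s\n", label); }

    static void match(const char *input)
    {
        if (strstr(input, "cat") != NULL)
            act("animal-rule");        /* *we* call it the animal rule */
        if (strstr(input, "cathode") != NULL)
            act("electronics-rule");   /* same letters, our other reading */
    }

    int main(void)
    {
        /* "cathode" contains the form "cat", so both rules fire; the
         * matcher cannot tell our two intended meanings apart.
         */
        match("the cathode glows");
        return 0;
    }

The point is that the matcher's behavior is exhausted by the forms; any
consistency with "meaning" is something we engineered into the patterns.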

If you want a reference which discusses what these causal properties might 
be, or be based on, see:

Boyle, C.F. (1991). "On the physical limitations of pattern matching,"
       _Journal of Experimental and Theoretical Artificial Intelligence_,
       3, 191-218.

I believe (though correct me if I'm wrong) that this paper is the first
attempt to physically determine what Searle's "causal properties" might
be. It's only a beginning, but I believe it's the right approach to the
problem (I currently have drafts of other papers on this topic in the
queue).

>>|Also, Haugeland draws a conclusion about semantics similar to yours
>>|("Artificial Intelligence and the Western Mind", 1989 -- sorry
>>|I don't have the full reference at hand).  His, though, depends on
>>|the fact that the number of consistent interpretations of particular
>>|symbol systems is highly constrained.  Unfortunately, this still does not
>>|solve the problem of intrinsic meaning.
 
>Intrinsic meaning?????  What is that?  I know of no examples of any such
>thing even in humans. All human symbol systems (aka languages) are purely
>arbitrary, and *learned*.  There is nothing *intrinsic* about them.

Insofar as there are "mental symbols" (call them mental states if you like)
in the head, they are about things in the external world solely by virtue
of their physical forms and the physical processes which manipulate them.
The meanings of symbols in computers, on the other hand, are projected onto
them by us; that is, they are not intrinsic, and so do not arise solely by
virtue of *their* forms and the physical processes that manipulate them.
That is why we say they are interpreted, and why human symbol systems can
be purely arbitrary.  The explanation for this difference, though, depends
on the fundamental physical differences alluded to above.
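
To see what "projected" means at the machine level, here is another toy C
sketch (again, only an illustration I made up): the very same bytes support
more than one consistent reading, and nothing in the bits themselves picks
one out.

    #include <stdio.h>

    int main(void)
    {
        /* The same four bytes, read two ways.  The bits do not "prefer"
         * either reading; we project an interpretation by choosing how
         * the program treats them.
         */
        unsigned char bytes[4] = { 0x41, 0x42, 0x43, 0x00 };

        printf("as text:    %s\n", (char *) bytes);   /* prints "ABC" */
        printf("as numbers: %d %d %d\n",
               bytes[0], bytes[1], bytes[2]);         /* prints 65 66 67 */
        return 0;
    }

Which reading is "right" is not a fact about the bytes; it is a fact about
us and the programs we wrap around them.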

-Frank


