From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!sei.cmu.edu!fs7.ece.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+ Mon Mar  9 18:35:30 EST 1992
Article 4291 of comp.ai.philosophy:
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: Definition of Understanding
Message-ID: <0dhcBvu00WBL84fL5x@andrew.cmu.edu>
Date: 5 Mar 92 20:24:27 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 72

Andrzej Pindor writes (in response to my post):

>Are you suggesting that anyone can reasonably expect the CR to know what
>a hamburger looks like without having this information in its database?

Two things.  First, no, I don't; that was my point.  But apparently some
people do when they claim the CR understands.  Second, what kind of
information is "what a hamburger looks like" for a computer, *not* for us
(this, of course, is the $64K question)?  No matter what information a
computer has that *we* say is about hamburgers, it is nothing but
combinations of high and low voltages, whether or not *we* see these as a
bitmap of the appearance of a hamburger, or as a propositional description
of hamburgers (this is just the Propositional/Mental Imagery debate).
Why?  Because the computer is a *physical* pattern-matching system, it
makes no difference what these voltage combinations are, as long as there
is a matcher that physically "fits" them and then, by virtue of that
physical match, triggers the appropriate changes.  The trigger signal,
which may be a voltage change (e.g., high to low), in no way reflects the
structure of the pattern that was matched.
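The point about matchers and triggers can be put in a few lines of code.
This is a toy sketch, not a model of any real machine: the two "encodings"
and the matcher are hypothetical, made up purely to show that whatever the
stored pattern is -- bitmap-like bits or a propositional string -- a
physical match reduces it to the same bare trigger, which preserves none
of the pattern's structure:

```python
# Two different bit patterns that *we* might say are "about hamburgers"
# (both hypothetical -- one "image-like", one "proposition-like"):
bitmap_encoding = bytes([0b10110010, 0b01101100])
propositional_encoding = b"isa(hamburger, food)"

def make_matcher(template):
    """Build a matcher that 'fits' one template, bit for bit."""
    def match(data):
        # The match itself is purely physical/structural: either the
        # incoming pattern fits the template or it doesn't.
        if data == template:
            return 1  # trigger: a single voltage change, say high-to-low
        return 0      # no fit, no trigger
    return match

m_bitmap = make_matcher(bitmap_encoding)
m_prop = make_matcher(propositional_encoding)

# On a successful match, both matchers emit the *same* structureless
# trigger -- nothing in it distinguishes bitmap from proposition:
print(m_bitmap(bitmap_encoding))        # -> 1
print(m_prop(propositional_encoding))   # -> 1
print(m_bitmap(propositional_encoding)) # -> 0 (wrong pattern, no trigger)
```

Downstream of the trigger, the system has only the 1 or the 0 to work
with; the internal structure of whichever pattern was matched is gone.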

Now, waxing a bit subjective -- trying to
get into the computer's mind, so to speak -- if it makes *no* difference
what the physical structures of the computer's representations are, then
how can an image of a hamburger come to the "computer's mind" analogous
to the mental image we experience when we hear the word?  If the physical
structures (and hence the procedures that make them causal when implemented)
are arbitrary, where does the structural information come from that, for
us, allows us to experience the same sensation we get when we actually
see a hamburger?  It probably means we are not physical pattern matchers
like computers are (though functionally we are pattern matchers, of course).
It follows that you can build computer systems that do formal symbol
manipulation until you're blue in the face, and you will never get one
that understands anything close to the way we do.

>Do you expect a blind person to know what a hamburger looks like?  And
>yet he/she could understand the story, right?

I do expect them to know what a hamburger "somatosensorily" looks like.
But without touch, no.  However, the above implies that a digital computer
not only does not, but *cannot* know what a hamburger looks like regardless
of what peripheral devices you hook up to it.  It does not process the
kind of informational structures that are found on the primary visual and
somatosensory cortices. Period.  Because of the physical process of pattern
matching, such information is, in essence, always arbitrarily encoded.  
Searle's "causal property" was right on the mark, but he was unable to do 
anything with it.

>Does a series of electrical signals sent by the brain to a hand to draw
>a hamburger indicate that the final drawing is an image of a hamburger?

Look at the signal just as it reaches the cerebellum, not once it leaves.
Its structure is, I would guess, extended much like an image of a hamburger,
probably with other structure folded in (no little photographs or anything).
But pattern-matching systems have nothing of the kind.  So, no, the
series of electrical signals going down your arm does not indicate the
final drawing; but, unfortunately, that's the *only* kind of information
the computer *can* have, and the reason is that it's a *physical*
pattern-matching system.

>Either we know in which form our brain gets images (or will know in
>future) and then we can give the visual info to the CR in this form and
>your objections become invalid, or we don't (and we don't) and then we
>can't expect the CR understanding to have the visual component.

No, we can't give it that information, for the reasons above.  Remember,
the CR is a formal symbol manipulator, and that's what Searle is railing
against.  He never said we couldn't build artificial intelligences.

-Frank
