From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor Mon Mar  9 18:35:51 EST 1992
Article 4319 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Definition of Understanding
Message-ID: <1992Mar6.191040.27904@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <0dhcBvu00WBL84fL5x@andrew.cmu.edu>
Date: Fri, 6 Mar 1992 19:10:40 GMT

In article <0dhcBvu00WBL84fL5x@andrew.cmu.edu> fb0m+@andrew.cmu.edu (Franklin Boyle) writes:
>Andrzej Pindor writes (in response to my post):
>
>>Are you suggesting that anyone can reasonably expect the CR to know what
>>a hamburger looks like without having this information in its database?
>
>Two things.  First, no I don't, that was my point.  But apparently some
>people do when they claim the CR understands.  Second, what kind of info
>is what a hamburger looks like for a computer, *not* us (this, of course,
>is the $64K question).  No matter what information a computer has that *we*
>say is about hamburgers, it is nothing but combinations of high and low
>voltages, whether or not *we* see these as a bitmap of the appearance of a
>hamburger, or a propositional description of hamburgers (this is just
>the Proposition/Mental Imagery debate). Why?  Since the computer is a
>*physical* pattern matching system, it makes no difference what these
>voltage combinations are, as long as there is a matcher that physically
>"fits" it and then, due to such a physical match, triggers
>the appropriate changes to begin.  The trigger signal, which may
>be a voltage change (e.g. high to low) in no way reflects the
>structure of the pattern that was matched.
>
So humans are *non-physical* pattern matching systems? How do we do it?
The existing experimental (as opposed to speculative) evidence seems to suggest
that the information the brain has is a combination of high and low voltages
and perhaps arrangements of molecules (like parts of computer memory). Do you
have any other suggestions?
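Frank's "matcher" picture, as I read it, can be sketched in a few lines, purely
as an illustration (the names and code are mine, not his): whatever structure
the stored voltage pattern has, the matcher's output is a single high/low
trigger that carries none of it.

```python
# Hypothetical sketch of a "physical pattern matcher": the matcher "fits"
# exactly one stored pattern, and its output is a single structureless
# trigger bit. Nothing of the matched pattern's structure survives.

def make_matcher(template):
    """Return a matcher that fires only on one exact bit pattern."""
    def matcher(pattern):
        # Output is just high (True) or low (False) -- the trigger
        # signal does not reflect the structure of what was matched.
        return pattern == template
    return matcher

# A bit pattern that *we*, the observers, choose to call "hamburger bits".
hamburger_bits = (1, 0, 1, 1, 0, 0, 1, 0)

match_hamburger = make_matcher(hamburger_bits)
trigger = match_hamburger((1, 0, 1, 1, 0, 0, 1, 0))  # fires: True
```

Note that nothing in `trigger` distinguishes a matched hamburger bitmap from a
matched proposition about hamburgers; that, as far as I can tell, is the whole
of Frank's point, and the question is why the brain's matchers should differ.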

>Now, waxing a bit subjective -- trying to
>get into the computer's mind, so to speak -- if it makes *no* difference
>what the physical structures of the computer's representations are, then
>how can an image of a hamburger come to the "computer's mind" analogous
>to the mental image we experience when we hear the word?  If the physical

Very hard to say, particularly since we do not know how a mental image arises
in our mind. Or perhaps you do know, and that is why you are so sure that a
hamburger image cannot come to the "computer's mind" in a way analogous to the
way we experience it? Why don't you share this knowledge with us?

>structures (and hence the procedures that make them causal when implemented)
>are arbitrary, where does the structural information come from that, for
>us, allows us to experience the same sensation we get when we actually

Why are you sure that a computer could not have the same *sensation* when
accessing a picture of a hamburger in memory as when actually seeing one?
I am really puzzled!

>see a hamburger?  It probably means we are not physical pattern matchers

How do you estimate this probability, and what is the alternative to 'physical
pattern matchers'? Please do not say 'non-physical...', unless you are prepared
to make some sensible suggestion as to what this 'non-physical' might be.
Otherwise it means NOTHING.

>like computers are (though functionally we are pattern matchers, of course).
>This means, therefore, that you can build computer systems that do formal
>symbol manipulation until you're blue in the face, and you won't ever get
>one that understands even close to the way we do.
>
>> Do you expect 
>>a blind person to know what hamburger looks like? And yet he/she could 
>>understand the story, right?
>
>I do expect them to know what a hamburger "somatosensorily" looks like.
>But without touch, no.  However, the above implies that a digital computer
>not only does not, but *cannot* know what a hamburger looks like regardless
>of what peripheral devices you hook up to it.  It does not process the
>kind of informational structures that are found on the primary visual and
>somatosensory cortices. Period.  Because of the physical process of pattern

Is there a reason why it couldn't? Either you know what these 'informational
structures' are, and then you could make computers implement them (do you
agree?), or you do not know, and then you cannot say that a computer could not
do it, right?

>matching, such information is, in essence, always arbitrarily encoded.  
>Searle's "causal property" was right on the mark, but he was unable to do 
>anything with it.
>
>>Does a series of electrical signals sent by the brain to a hand to draw
>>a hamburger indicate that the final drawing is an image of a hamburger?
>
>Look at the signal just as it reaches the Cerebellum, not once it leaves.
>Its structure is, I will guess, extended much like an image of a hamburger and
>probably with other structure folded in (no little photographs or anything).
>But pattern matching systems have nothing of the kind. So, no, the

Is there a reason why they could not? (If I understand you correctly, you
mean computer-based 'pattern matching systems', right?)

>series of electrical signals going down your arm do not indicate the
>final drawing, but, unfortunately, that's the *only* kind of information
>the computer *can* have, and the reason is that it's a *physical*
>pattern matching system.
>
>>Either we know in which form our brain gets images (or will know in future)
>>and then we can give the visual info to the CR in this form and your
>>objections become invalid, or we don't (and we don't) and then we can't
>>expect the CR understanding to have the visual component.
>
>No, we can't give it that information for the reasons above. Remember, the

Sorry, but I do not see these reasons. If this information is physically
encoded, then *in principle* we can give it to the CR and the CR can process
it. Of course, you might escape into this 'non-physical' stuff, but unless you
have any evidence that such a thing exists, it is religion, not science.
BTW, I have nothing against religion (at least a personal one; organized
religion is a different thing), but it is good not to confuse the two.

>CR is a formal symbol manipulator, and that's what Searle is railing
>against.  He didn't ever say we couldn't build artificial intelligences.
>
>-Frank


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca


