From newshub.ccs.yorku.ca!torn!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!rpi!psinntp!psinntp!snoopy!short Mon Oct 19 16:59:06 EDT 1992
Article 7269 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4778 comp.ai.philosophy:7269
Newsgroups: comp.ai,comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!swrinde!zaphod.mps.ohio-state.edu!rpi!psinntp!psinntp!snoopy!short
From: short@asf.com (Lee Short)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <1992Oct13.144023.7698@asf.com>
Organization: Hughes ASF
References: <1992Sep23.162606.13811@udel.edu> <BvM75v.AEF@eis.calstate.edu> <burt.718784231@aupair.cs.athabascau.ca>
Date: Tue, 13 Oct 1992 14:40:23 GMT
Lines: 39


In article <burt.718784231@aupair.cs.athabascau.ca> burt@aupair.cs.athabascau.ca (Burt Voorhees) writes:
>  It's not that Searle doesn't understand the systems reply - he just doesn't
>buy it.  I'd guess that he just doesn't accept the behaviorist assumption
>at the basis of the Turing test.  The whole point of the Chinese Room is
>that you can pass all the behavioral tests you want but if there ain't
>nobody home, there ain't nobody home.
>

I see.  Searle's criterion for "understanding" is that there be a
recognizably conscious entity involved.  Where I come from, this is
called begging the question.

In the days when I spent time worrying about such things, I thought
the best way to attack Searle was simply to point out that his
argument applies equally well to humans.  The question 'Where is
the understanding in a human?' cannot be answered, just as the
parallel question about the Chinese room cannot be.  And to
get to the real meat of the matter, the question 'What is meant by
"understanding"?' can't be answered either -- except in behaviorist
terms.  If Searle wants me to buy his account of the impossibility of
machine understanding, he'd better come up with an account of how and
why humans differ from machines in some way that bears on the question
of understanding.  To do that, he must answer the questions above.
Until he does, his argument merely points out some of the
consequences of believing that machine intelligence is possible.

It's entirely possible that Searle is correct, but we don't know
enough about "understanding" to have any good reasons to believe that
he is.


Lee

-- 
short@asf.com              I'll believe in Virtual Reality when they create 
Software Janitor                           the first virtual beer.  
Lee Short              I speak for none of the many steps in the food chain 
Hughes Training, Inc.         between myself and General Motors corporation.