From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!alberta!ubc-cs!uw-beaver!micro-heart-of-gold.mit.edu!wupost!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Feb 20 15:21:58 EST 1992
Article 3851 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!alberta!ubc-cs!uw-beaver!micro-heart-of-gold.mit.edu!wupost!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Keywords: consciousness,functionalism,meaning
Message-ID: <6205@skye.ed.ac.uk>
Date: 18 Feb 92 22:57:07 GMT
References: <1992Feb13.201109.25439@psych.toronto.edu> <418@tdatirv.UUCP> <1992Feb16.185120.9182@psych.toronto.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 23

In article <1992Feb16.185120.9182@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <418@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:

>>It is only by *trying* to make such a computer that the answer can be found.
>
>Stanley, you have obviously missed Searle's point.  His claim is that
>even if we make a computer which *behaviourally* acts like it is 
>conscious, it still won't be.  Note that his proof does not rely on
>a difference in the "observables" between human behaviour and computer
>behaviour, and so therefore is not decidable empirically.

I agree with both of you.  Searle's argument is as MG says.  If he's
right (about the Chinese Room not understanding, about the CR being
an adequate representative of computers running programs in general,
etc), then there's no way to make programs that understand.

On the other hand, if Searle has failed to prove his conclusion,
then it's still possible that we will eventually learn how to
make programs that understand.  At the same time, we will learn
more about how humans work, and about many other issues.  So it
may be that we'll eventually be in a better position to answer
these questions, even if there's still an irreducible philosophical
component to them.
