From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Thu Feb 20 15:22:02 EST 1992
Article 3857 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Keywords: consciousness,functionalism,meaning
Message-ID: <426@tdatirv.UUCP>
Date: 18 Feb 92 20:58:01 GMT
References: <1992Feb13.045721.29805@cs.yale.edu> <1992Feb13.201109.25439@psych.toronto.edu> <418@tdatirv.UUCP> <1992Feb16.185120.9182@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 39

In article <1992Feb16.185120.9182@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|Stanley, you have obviously missed Searle's point.  His claim is that
|even if we make a computer which *behaviourally* acts like it is 
|conscious, it still won't be.  Note that his proof does not rely on
|a difference in the "observables" between human behaviour and computer
|behaviour, and so therefore is not decidable empirically.  It instead
|relies on an analysis of the way computers operate.  In order to attack 
|Searle's claim, you have to do philosophy (however nasty you may
|find that...)

As *I* understand Searle, he is talking about being indistinguishable by
*external* *behavior*.

He is saying nothing about whether it is distinguishable by *dissection*.

And what I was talking about in my statement was *not* the Turing Test per se.
I meant that by trying to build an intelligent machine we would gain
sufficient understanding of what intelligence is to be able to determine,
by inspection of the machine's internal workings, whether or not it is
actually intelligent.

And I think that, given our current ignorance about the nature of minds,
this is the only way we can get the necessary knowledge.

Now, if Searle really *is* talking about systems being *internally*
indistinguishable and *still* not having a mind - then I lose all
interest in what he is saying.  "A difference that makes *no* difference
is not a difference".  There must be *some* detectable difference in the
internal workings of the system for his arguments to have any real-world
meaning - otherwise he is talking about a disguised dualism.  I have
absolutely no interest in, or tolerance for, purely logical distinctions
that have *no* real-world correlates at all.

-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)
