From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!micro-heart-of-gold.mit.edu!news.bbn.com!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:56:28 EST 1992
Article 4520 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!micro-heart-of-gold.mit.edu!news.bbn.com!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6419@skye.ed.ac.uk>
Date: 17 Mar 92 23:16:19 GMT
References: <1992Mar14.213045.21776@mp.cs.niu.edu> <1992Mar16.224423.29809@psych.toronto.edu> <centaur.700790865@cc.gatech.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 37

In article <centaur.700790865@cc.gatech.edu> centaur@terminus.gatech.edu (Anthony G. Francis) writes:
>To Searle, no computer can have semantics, and therefore
>no computer, _no matter what its functionality, no matter how close its
>behavior is to a humans, no matter =how= indistinguishable it is from
>you or I in any behavioral observable_, can ever be considered intelligent.

So this is a "functionalist" definition, rather than an operational
or even behavioral one?

>This, I think, is the big problem with the Chinese Room. It's an attempt
>to show that a functionalist definition of intelligence is insufficent,
>and as such is the first logical step towards denying the existence of any
>minds other than our own.

A familiar argument.  If you don't accept the Turing Test (or
similar), you're a skeptic about other minds.  And everyone knows no
one will argue for such skepticism; so it's thought that this point
will defeat any criticism of the Turing Test.

Well, as I've said before, it might also matter how the computer
works.  We can look at the program, after all.  Maybe some kinds
of programs produce understanding and others don't even if both
produce the right behavior.  (Assuming that programs can produce
understanding at all.)

Since this isn't actually an argument against AI, I'm surprised
at how few people in this newsgroup are willing to accept it.
But it seems that, if anything, more people in comp.ai.phil are
interested in defending the Turing Test than are interested in
defending the possibility of machine intelligence.

If we're going to try to line up the arguments on both sides
(as I think McDermott suggests), let's do it for the Turing Test
too.  A defeat for the TT would make the entire discussion much
more reasonable, IMHO.

-- jd
