From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo Tue Mar 24 09:55:27 EST 1992
Article 4436 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!christo
From: christo@psych.toronto.edu (Christopher Green)
Subject: Re: Chinese room miscellanea
Organization: Department of Psychology, University of Toronto
References: <1992Mar11.231804.13992@bronze.ucs.indiana.edu>
Message-ID: <1992Mar12.212000.6784@psych.toronto.edu>
Date: Thu, 12 Mar 1992 21:20:00 GMT

In article <1992Mar11.231804.13992@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>Christopher Green writes:
>
>>(Actually, the only functionalist
>>I know who is actually this strong is Dave Chalmers and John McCarthy.
>>Even Fodor defers to ignorance when it comes to the implications of
>>functionalism for, say, qualia.)
>
>Alas, you don't meet the right people then.  Any number of people, in AI
>and philosophy, hold just this view.
[...]
>Any number of others, from Shoemaker to Dennett, think that functionalism
>can provide a good account of qualia; most of them are much more sanguine
>about it than I am.  

Shoemaker I don't know much about. As for Dennett, I'm not sure that "Quining"
counts as a "good account". More like the repudiation of the need for an account.

>And of course most people in philosophy have kept
>the issues of "understanding" and qualia entirely separate; personally,
>I find it much easier to be a functionalist about beliefs, say, than
>about qualia.

"Ease" isn't really the issue. It's the logical implications of your
beliefs that's crucial. As you can see in the discussion being held here,
all kinds of people are denying being "strong" AI-ists, when it's simply
entailed by their stated beliefs.
>
>>The man in the Room (consciously) memorizes all of the rules and the
>>shapes of all the symbols.  Then he (consciously) implements those
>>rules in attempting to construct Chinese answers to the Chinese questions
>>he receives.  In doing this, he satisfies the requirements of being
>>a Turing machine.  Because his answers are indistinguishable from those
>>that would be given by a native Chinese speaker, he also passes the Turing
>>test. We are now, under the TT, expected to say that he understands
>>Chinese.
>
>(1) For the zillionth time, the Turing test is not essential to the
>strong AI hypothesis.  (2)  According to strong AI, *implementations*
>of *systems* (or of programs, or of FSAs, or whatever) are the subjects
>of understanding.  In this situation we have two distinct implemented
>systems.  

For the zillion-and-first time, I don't buy this. It's an ad hoc cop-out.
I guess you're right about this debate. I'll stop if you will.

-- 
Christopher D. Green                christo@psych.toronto.edu
Psychology Department               cgreen@lake.scar.utoronto.ca
University of Toronto
---------------------