From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Tue Apr  7 23:24:35 EDT 1992
Article 4966 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: The Challenge
Organization: Department of Psychology, University of Toronto
References: <6419@skye.ed.ac.uk> <1992Apr1.150750.9618@cs.yale.edu> <6742@pkmab.se>
Message-ID: <1992Apr7.223711.18902@psych.toronto.edu>
Keywords: Searle, Chinese Room
Date: Tue, 7 Apr 1992 22:37:11 GMT

In article <6742@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>In article <1992Apr1.150750.9618@cs.yale.edu> mcdermott-drew@CS.YALE.EDU (Drew McDermott) writes:
>>I have been mildly surprised by the reaction to my "challenge"
>>regarding the Chinese Room.  It turns out that no one is willing
>>actually to defend the argument.  Everyone actually wants to talk
>>about something else:
>>
>>  In article <6419@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>
>>  >If we're going to try to line up the arguments on both sides
>>  >(as I think McDermott suggests), let's do it for the Turing Test
>>  >too.  A defeat for the TT would make the entire discussion much
>>  >more reasonable, IMHO.
>>
>>Michael Gemar (in e-mail) suggested that the CR was not, after all,
>>the issue, but syntax vs. semantics was.  Christopher Green made the
>>same point, in a posting I seem to have misplaced.  
>
>Oh, so they know the final word on what the issue is, indeed?

Come, come, let's all be friendly here.  I was not attempting to make
a grand claim without argument.  As I have tried to demonstrate repeatedly
(but evidently to no avail) in this forum, the Chinese Room is merely
a *demonstration* of the formal *claim* that syntax doesn't yield semantics.
We can argue whether or not it is a good demonstration, but, as far
as I interpret it, that is the role it plays in Searle's argument.  As I have
suggested in an unpublished paper, the very "sexiness" of the example has
served to obscure the real issue, as everyone concentrates on the 
example, and not on the argument itself.

If you have a different view, fine, argue for it.  I have for mine. 

>I might be willing to participate in a debate of the suggested kind that
>takes on a broader scope of this general issue. For instance, the above
>mentioned people could argue for the position that "computers couldn't
>possibly possess minds", using all arguments they find relevant, not
>limited to the Chinese Room argument, while I would take on the position
>"you're wrong, there is no proof that computers can't have minds".

I have no real anti-AI agenda, appearances to the contrary.  I am not
interested in simply marshalling all the arguments I can find against
the claim that computers could possess minds.  I *am* interested in 
discussing *issues*.

>I would not be willing to argue the position that "I can prove that
>computers can have minds", though, which is perhaps the position which
>the above mentioned people believe the rest of us are arguing from. Is
>there anyone at all here who would be brave/foolish enough to take on that
>position? I don't think it is possible to win with that position. Even
>if we could demonstrate a real computer with a "real" mind according to
>us (which we can't), it would still be as possible as ever to claim that
>"it's a computer - it can't be a real mind", or some lesser version
>there-of.

Again, I am not interested in "us vs. them" debates.  I am perfectly
willing to be open-minded - heck, I've even changed my view about the
sufficiency of Searle's response to the Systems Reply.  But if your
demonstrations can't hold up to reasoned argument, then they're not
very good demonstrations.

> An empirically verified explanation of how the human brains
>work (which we don't have either) would convert some people, I think, but
>not even that is water-tight. 

How do you empirically verify how the brain generates meaning?  The whole
debate is whether behaviour (i.e., Turing Test) is sufficient to indicate
a mind.  In this case, it seems to me empirical considerations don't enter
into it.  It is a matter of analysis.

> I think the problem to a great extent
>involves how we choose to define meaning and mind and other terms (i.e.,
>what attributes of these concept we take for granted, and base our further
>arguments on), and there simply is no way to "prove" a definition. Some
>definitions can be shown not to lead to the expected consequences, though.

I agree completely with all of the above, and I think this is the *only*
way to proceed on this issue.  This is why I started the discussion a
few months ago about "panpsychism" - to explore what the full consequences
of functionalism were.  This can all be done by philosophical analysis -
indeed, I don't see how else it *can* be done.

>I also think that the failure even to agree on the _issue_ for this proposed
>more formal debate, provides the definite demonstration that people have been
>attributing the wrong positions to each other, and that there have been more
>positions involved in the debate than some have allowed for in their responses,
>as I have been trying to point out to some of the debaters, mostly unto deaf
>ears. The pro-Searle side have been interpreting most of their contenders as
>advocating Strong AI, ignoring the possibility that some of them may have
>been more concerned with just pointing out loop-holes in the anti-Strong AI
>arguments, and the anti-Searle side have perhaps mostly interpreted the
>pro-Searle side as primarily defending the Chinese Room, ignoring other
>points being made.

This may very well be the case, although the validity of one's argument
has no relation to what side one is on.

> Of course, sloppy arguments and misunderstandings abound
>on both sides, too, and I mean _both_ sides, but that's only to be expected
>in an open forum like this, I think, and is nothing to be upset about.

Ah, the wondrous chaos of USENET!

>As I said, I might be willing to participate in a debate with a wider
>scope, as suggested by the pro-Searle side. But if we now widen the issue
>from being "the Chinese Room" to "syntax vs. semantics", I would expect a
>significant risk that the Chinese Room will still be invoked rather quickly
>as one of the arguments against Strong AI, and then we'd be back at the
>same point again. Therefore, I have to ask: would the pro-Searle side (all
>of the participants, not only some) be willing to agree to leave the Chinese
>Room out of the argument, as being insufficient for proving the point? If
>they are not willing to do that, then I think we would still need to argue
>that out before going on to any wider questions.

I'd be willing to drop the Chinese Room in favor of a wider debate.  I can't,
however, speak for my "co-religionists".


- michael
