From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!swrinde!zaphod.mps.ohio-state.edu!uwm.edu!linac!uchinews!spssig.spss.com!markrose Wed Apr 22 12:04:07 EDT 1992
Article 5153 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!swrinde!zaphod.mps.ohio-state.edu!uwm.edu!linac!uchinews!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: The Challenge
Message-ID: <1992Apr20.172047.36128@spss.com>
Date: 20 Apr 92 17:20:47 GMT
References: <1992Apr16.134520.6283@oracorp.com>
Organization: SPSS Inc.
Lines: 52
Nntp-Posting-Host: spssrs7.spss.com

In article <1992Apr16.134520.6283@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>markrose@spss.com (Mark Rosenfelder) writes:
[not this paragraph, which is Jeff Dalton's, but the next one]
>>>(NB my position is that the CR fails to show the impossibility
>>>of strong ai but that it's useful nonetheless, in part because
>>>it shows we should question the Turing Test.)
>
>> If you want a philosophical objection to the Turing Test I don't see
>> how you can beat the roomful of monkeys.
>
>Why do you say that?  A room full of monkeys will, with very high
>probability, not pass the Turing Test. 

Plausibility and probability are nice notions, but when doing philosophy we
like them to keep their distance...

Why do you object to the roomful of monkeys as improbable?  You can't get
much more improbable than the humongous lookup table, but that's been
seriously discussed in this forum.  (It's true that the monkey test is
not likely to be repeatable, but back when we were discussing the lookup
table you argued that it didn't have to be repeatable either.)

The advantage of the roomful of monkeys (as opposed to the Monkey Room,
in which Searle executes a program which simulates a monkey) is that it
very cleanly disposes of the possibility that the Turing Test can give
*certain* evidence of the existence of understanding.  The humongous lookup
table is not so clean, since some people maintain that the damn thing does
understand, and others (e.g. myself) say that it's not really a form of
non-human intelligence.

>Of course, there is the tiny
>probability that it will pass, but that is not a problem specifically
>for the Turing Test; *any* physically performable test has a nonzero
>probability of giving the wrong answer (unless it is a test that, by
>definition, everything passes, or nothing passes). That might raise
>philosophical problems with empirical science, in general, but it
>doesn't raise problems for the Turing Test in particular.

OK.  But it raises problems for a theory of intelligence which *solely*
depends on the Turing Test.  A theory which relied on some other tests as
well would be more secure.

>On the other hand, if there were a convincing argument that the CR was
>possible and that it was incapable of understanding, then that would
>really raise problems for the Turing Test, since it would suggest that
>there are two classes of beings, conscious beings and pseudo-conscious
>beings, that could not be distinguished by their behavior (even
>statistically).

True.  However, the poster I was responding to had already stated that 
the CR does not disprove strong AI.  And if it doesn't, it doesn't cause
problems for the Turing Test, tho' something else might.
