Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <D0K22K.B0D@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <1994Dec5.152724.10065@oracorp.com> <D0ELL3.9xt@spss.com>
Date: Fri, 9 Dec 1994 18:01:31 GMT
Lines: 76

In article <D0ELL3.9xt@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1994Dec5.152724.10065@oracorp.com>,
>Daryl McCullough <daryl@oracorp.com> wrote:

>>The people who say they wouldn't be convinced by a computer that
>>passes the Turing Test are just not being honest with themselves. 

Why can't skeptics about the TT just be wrong?  Why "not honest
with themselves"?  The suggestion, here as in many other instances,
seems to be that no one could have good reasons for being skeptical.

>>It is one thing to give intellectual, Searle-style arguments as to why a
>>hypothetical TT-passing program doesn't really understand, and it is
>>quite another to actually *meet* such a program, and dismiss it. I am
>>willing to bet that there is not a single person on this newsgroup who
>>would not come to accept a program as intelligent and conscious if the
>>program were capable of carrying on a lively, insightful discussion
>>about politics, morality, love, family and artificial intelligence.

Whether I would accept it or not would depend on what else I knew
about the program and on what had been discovered about consciousness, 
programming, etc, between now and when I meet this program.

>It's a fascinating question, how people would really react to intelligent
>TT passers.  I am going by what anti-AI writers claim they would do; 
>you're sure that faced with the real thing, their skepticism would vanish.

Well, there are a number of cases we might imagine.  For instance,
Daryl McCullough comes to me and says "Guess what, Jeff, I've
developed this great program that passes the TT, and here's how
it works: ...".  Now, maybe after hearing this I would think
"Of course!  That would produce consciousness, understanding, etc.
Why did no one think of it before?"

Or maybe I'd think "Hmm.  Maybe.... I'll have to think about it."
Or "Don't think so.  I don't think there's any consciousness behind
the behavior."

Or perhaps I meet a program with no explanation of how it works.
This seems to be the case most TT-defenders prefer (and I take 
TT-defender in a rather broad sense).

Anyway, in that case I might well give it the benefit of the doubt.
But I don't think that's the only case worth considering.

>That may be-- it's hard to believe that Searle has really tried to 
>picture to himself what passing the TT really means-- but this conclusion
>may be defeated by human prejudice.  Humans are ready enough to treat 
>other members of their species as less than human; why should we expect
>them to treat AIs any better?

That's true, but there's also a prejudice that works the other way.
There's a strong tendency to see understanding, consciousness, etc
behind any sensible (and even not so sensible) use of words.

One sign of this appears in program design.  Since computers aren't
considered to be "language users" (instead, they work with numbers,
databases, etc) there's a tendency to overlook some possibilities for
presenting meaningful descriptions to the user.  For instance, many
people have to record the time they spend on various activities at
work.  In many cases, they have to explicitly deal with numeric
designations for all the categories, even though it would be easy 
to provide more meaningful names.
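The fix is trivial -- a lookup table from codes to names.  A minimal
sketch (in Python; the codes and category names here are made up for
illustration, not taken from any real timesheet system):

```python
# Hypothetical time-recording categories.  The bare numeric codes are
# the kind of designation users are often forced to type in directly;
# the names are what the program could just as easily have shown.
CATEGORIES = {
    101: "project meetings",
    102: "documentation",
    205: "customer support",
}

def describe(code):
    """Present a code together with its meaningful name."""
    return f"{code}: {CATEGORIES.get(code, 'unknown category')}"
```

The point is only that the mapping costs almost nothing to supply;
nothing about the machine forces the user to see bare numbers.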

A more direct sign is the reaction to even extremely simple programs
that manage to construct sentences or sequences that are sometimes
sort of like sentences.  I certainly find it easy to imagine
understanding behind such output, especially if I get to interact 
with the program by typing in some input and getting back a reply.
Eliza and other Doctor programs are examples, though I happen not
to find them very convincing myself.  I've sometimes found it more
convincing to talk to a simple program I have that replies by randomly
chaining segments of past input (like an interactive "dissociated
press").
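The chaining scheme is simple enough to sketch.  Here is a rough
reconstruction of that kind of program (in Python; the segment length
and segment count are arbitrary choices of mine, and this is not the
actual program I use):

```python
import random

class DissociatedPress:
    """Reply by randomly chaining fixed-length runs of words drawn
    from past input -- an interactive "dissociated press"."""

    def __init__(self, segment_length=3):
        self.segment_length = segment_length
        self.words = []  # every word seen so far, in input order

    def observe(self, line):
        """Remember the user's input for later chaining."""
        self.words.extend(line.split())

    def reply(self, max_segments=4):
        """Chain several randomly chosen segments of past input."""
        if len(self.words) < self.segment_length:
            return "..."
        pieces = []
        for _ in range(max_segments):
            start = random.randrange(len(self.words) - self.segment_length + 1)
            pieces.extend(self.words[start:start + self.segment_length])
        return " ".join(pieces)

def chat():
    """Simple read-reply loop; end with EOF (Ctrl-D)."""
    bot = DissociatedPress()
    try:
        while True:
            line = input("> ")
            bot.observe(line)
            print(bot.reply())
    except EOFError:
        pass
```

Because every reply is stitched entirely from things you yourself
typed, the output has a surface plausibility that makes it easy to
imagine understanding behind it -- which is exactly the prejudice at
issue.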

-- jeff
