Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!oitnews.harvard.edu!yale!zip.eecs.umich.edu!newshost.marcam.com!news.mathworks.com!gatech!news-feed-1.peachnet.edu!news.netins.net!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Turing's Playful Games (+Eliza+Therapy)
Message-ID: <D80ss8.A3s@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <3k4iub$p8n@oahu.cs.ucla.edu> <3nhj4g$i12@percy.cs.bham.ac.uk> <D7rpGJ.B5r@spss.com> <3ntqa4$bpi@percy.cs.bham.ac.uk>
Date: Wed, 3 May 1995 20:55:17 GMT
Lines: 72

Kudos for braving the storm of nonsense in comp.ai.philo...

In article <3ntqa4$bpi@percy.cs.bham.ac.uk>,
Aaron Sloman <A.Sloman@cs.bham.ac.uk> wrote:
>markrose@spss.com (Mark Rosenfelder) writes:
>> I don't doubt that Turing's Imitation Game could be turned into a
>> scientific test, by supplying detailed answers to such questions;
>> my point was that till this is done, it's not a scientific test.
>
>OK. I have no quarrel with that.
>
>I have no reason to believe Turing was trying to design a
>`scientific test'. He was (a) doing some interesting and provocative
>thought experiments (b) making some loosely specified predictions
>about what could be achieved before the end of the century,
>(c) trying to refute objections by indicating some of the sort of
>work that might have to be done.

I quite agree.  As you go on to say, people often inflate the TT into
something it's not; that was part of why I was concerned to show that 
it isn't a scientific test.

>> >I think Turing made a pretty shrewd guess about how difficult progress
>> >in AI would be, and what might be achieved in 50 years or so.
>(mark)
>> I'd say he was highly optimistic.  We are already 45 years into the time
>> period, and I don't think we yet have a program that could fool the average
>> investigator 30% of the time, compared to a human, in 5 minutes.
>
>Interesting conjecture. Is that based on any sort of experimental
>observation or survey of AI programs?
>
>I have used a Pop-11 based variant of Eliza in `open days' here at
>Birmingham University, and before that at Sussex University. The
>code, which runs in Poplog Pop-11, and uses no AI techniques at all,
>only a simple pattern matcher and lots of `canned text', is
>accessible as:
>    ftp://ftp.cs.bham.ac.uk/pub/dist/poplog/lib/elizaprog.p
>
>My informal observations of the reactions of many visitors (mostly
>teenagers but also some older people) suggest that nearly all
>`average' people (not trained investigators) could be fooled for 5
>minutes or even longer, into thinking they are communicating with a
>person.

Notice that I said "compared to a human".  That people can take Eliza 
for a human is well known (hard to believe, I admit, having played with 
Eliza myself, but well known).  That doesn't mean that, if asked to talk 
to Eliza for five minutes and to a human for five minutes, they couldn't 
tell the difference.

>Well, as it happens I have a relevant story to tell. Someone I know
>had several years of psychoanalysis, some time ago. Recently I saw
>her for the first time for many years, and we were chatting about
>AI. She then reminded me of the occasion, a long time ago, when I
>let her play with (an earlier version of) the Pop-11 Eliza, which
>was in the middle of her therapy period. She now claims that it had
>a profound effect on her, because she could not find any significant
>difference between talking to Eliza and talking to her therapist:
>she concluded that he was just following an Eliza-like program,
>and she lost her respect for him.

A nice story; but on your own showing the woman decided that the 
therapist was machine-like, not that the program was human-like.

The discovery that it doesn't take much intelligence to talk to and even 
help a patient in certain kinds of therapy is interesting; but I don't think 
it tells us much about intelligence in general.  The techniques used for
Eliza don't scale up in any interesting way (IMHO).  
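To make the point concrete, the whole Eliza approach amounts to something 
like the following sketch (my own illustration in Python, not the Pop-11 
program cited above): a handful of surface patterns paired with canned 
responses, and no representation of meaning anywhere.

```python
import random
import re

# Illustrative Eliza-style rules: each pattern is matched against the
# user's utterance, and a canned response template is filled in with
# whatever text the pattern captured.  No semantics, just string surgery.
RULES = [
    (r"\bI am (.*)", ["Why do you say you are {0}?",
                      "How long have you been {0}?"]),
    (r"\bI feel (.*)", ["Tell me more about feeling {0}.",
                        "Do you often feel {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}."]),
]

# Fallbacks when nothing matches -- the "canned text" that keeps the
# conversation going without understanding any of it.
DEFAULTS = ["Please go on.", "I see.", "What does that suggest to you?"]

def eliza_reply(utterance):
    for pattern, responses in RULES:
        match = re.search(pattern, utterance, re.IGNORECASE)
        if match:
            return random.choice(responses).format(*match.groups())
    return random.choice(DEFAULTS)
```

Adding more rules only buys more canned exchanges; it never adds a model 
of what is being talked about, which is why the technique doesn't scale.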

(The infamous humongous lookup table is presumably the logical extension
of Eliza's approach.  Constructing one would pass Turing's test, but
tell us nothing of interest about human or artificial cognition.)
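The lookup-table idea can be caricatured in a few lines (again my own 
illustration): map every conversation history directly to a reply. The 
table below has two entries; a real one would need an entry for every 
plausible five-minute dialogue, which is why it remains a thought 
experiment rather than a program.

```python
# Caricature of the "humongous lookup table": whole conversation
# histories (as tuples of utterances) map straight to canned replies.
LOOKUP = {
    ("Hello.",): "Hi. How are you today?",
    ("Hello.", "Fine, thanks. And you?"): "Can't complain.",
}

def table_reply(history):
    # No generalization at all: an unlisted history has no answer.
    return LOOKUP.get(tuple(history), "<no entry>")
```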
