From newshub.ccs.yorku.ca!torn!utcsri!rpi!uwm.edu!ogicse!psgrain!ee.und.ac.za!ucthpx!casper.cs.uct.ac.za!nhorne Wed Oct 14 14:58:13 EDT 1992
Article 7172 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!uwm.edu!ogicse!psgrain!ee.und.ac.za!ucthpx!casper.cs.uct.ac.za!nhorne
From: nhorne@casper.cs.uct.ac.za (N E Horne)
Newsgroups: comp.ai.philosophy
Subject: Re: Dualism
Message-ID: <BvtnEt.5Dz@casper.cs.uct.ac.za>
Date: 8 Oct 92 21:23:12 GMT
Article-I.D.: casper.BvtnEt.5Dz
References: <1992Oct6.171057.26199@oracorp.com>
Organization: Computer Science Department, University of Cape Town
Lines: 34

In <1992Oct6.171057.26199@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:


>While it may be true that there could be a program that behaved as
>though intelligent yet was not, ELIZA is not an example. Behaving as
>if intelligent means making responses that are appropriate for an
>intelligent being in all possible circumstances (for all possible
>input/output histories, at least). The trickery involved in ELIZA is
>that it behaves acceptably for a shrewdly guessed set of inputs, a set
>that is likely to be the first things said to "her". If you try to go
>beyond this very small set and say unexpected things, ELIZA's
>responses quickly degenerate into nonsense. Taking a look at ELIZA's
>code shows you the limitations of "her" abilities to respond
>intelligently.
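
The trickery described above — keyword-spotting against a shrewdly guessed
rule set, with a content-free fallback for everything else — can be sketched
in a few lines. The patterns below are hypothetical illustrations, not
Weizenbaum's actual script:

```python
# A minimal ELIZA-style responder (hypothetical rules, not the real script).
# It matches the input against a small set of anticipated keyword patterns;
# anything outside that set falls through to a canned, content-free default.

import re

RULES = [
    (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.I), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (\w+)", re.I), "Your {0} seems important to you."),
]
DEFAULT = "Please go on."

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            return template.format(*match.groups())
    # Unexpected input: this is where the responses "degenerate" --
    # the program has nothing to say beyond its guessed set.
    return DEFAULT

print(respond("I am unhappy"))   # Why do you say you are unhappy?
print(respond("Colourless green ideas sleep furiously"))   # Please go on.
```

Anything said "to" such a program that misses every pattern gets the same
stock reply, which is exactly the degeneration visible in ELIZA's code.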

>The assumption behind behaviorist AI is not that "If you can fool
>someone into thinking a program is intelligent, then it is
>intelligent", but "If the program behaves like an intelligent person
>in all possible circumstances, then it is intelligent". While there
>may be philosophical arguments against this position, people's
>gullibility when interacting with ELIZA is no argument.

Arguments on both sides of the AI debate certainly are concerned with the
gullibility of ELIZA's victims. For one, gullibility (which comes in degrees)
is limited in much the same way that ELIZA's ability to anticipate our inputs
is. Perhaps the problem is not that ELIZA is insufficiently sophisticated to
cover a broad enough range of human-generated inputs intelligently, but that
humans are not suitably advanced to tease out the limits of human genius.
When confronted by an (admittedly hypothetical) interrogator of suitable
sophistication, our own psychotherapists may reduce to little more than
ELIZAs with larger vocabularies, better guesses, and more advanced conceptual
schemes ... if they don't already.


Neil Horne


