Newsgroups: comp.ai.nat-lang,alt.cyberspace,alt.internet,alt.net-scandal,comp.ai,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Eliza (was Re: Are there non-humans lurking on Internet/Usenet?)
Message-ID: <D3oxrJ.DHu@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <3h2qas$m4f@percy.cs.bham.ac.uk> <3h3flr$lhl@crl4.crl.com> <vlsi_libD3nBn4.ILH@netcom.com>
Date: Wed, 8 Feb 1995 16:57:18 GMT
Lines: 43
Xref: glinda.oz.cs.cmu.edu comp.ai.nat-lang:2822 comp.ai:27241 comp.ai.philosophy:25331

In article <vlsi_libD3nBn4.ILH@netcom.com> vlsi_lib@netcom.com (Gerard Malecki) writes:
>In article <3h3flr$lhl@crl4.crl.com> dbennett@crl.com (Andrea Chen) writes:
>>
>>Eliza totally horrified its author.  If one went along with the game, it
>>made a plausible imitation of a Rogerian therapist who essentially
>>sits back and lets people wander.  And people did wander: they confessed
>>to it things that they never told anyone.  They thought it was "real".
>>Some people used it to "prove" that soon we would have "computerized
>>therapy".  As a result Weizenbaum (if I remember the author's name
>>correctly) moved from a leading light in AI to one of its bitterest
>>enemies.

It's still sometimes difficult to convince people who "ought to
know better" (e.g. AI PhD students) that Eliza does not have "some"
understanding.  Indeed, I'm pretty sure that the people in comp.ai.philosophy
can come up with a number of arguments to support that view (i.e., that
Eliza has some understanding).  Whether they can come up with better
arguments against the view is less clear.

>But Eliza was pretty dumb and never made any attempt to semantically
>analyze the conversation, except in the crudest sense, and limited to the
>last question posed and not the overall conversation till that point.

That may have been true of the original version.  I no longer
remember.  But some versions did a bit more.  For instance, if
you said "... my X ...", they would remember this and then,
if the conversation reached a point where they had no interesting
response to the most recent input, they would say something about
"your X".
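The "my X" trick described above can be sketched in a few lines of
modern Python.  To be clear, this is NOT Weizenbaum's actual code or
script -- the keyword rules and the wording of the replies here are
invented for illustration:

```python
import re

# A toy sketch of the memory fallback described above.  Phrases of the
# form "my X" are remembered and reused later, when no keyword rule has
# anything interesting to say about the current input.

memory = []  # remembered "my X" topics, oldest first

RULES = [
    # (pattern, reply template) -- invented stand-ins for a real script
    (r"\bI am (.+)", "How long have you been {}?"),
    (r"\b(?:mother|father)\b", "Tell me more about your family."),
]

def respond(text):
    """Return an ELIZA-style reply to one line of user input."""
    # Stash anything the user mentions as "my X" for later.
    m = re.search(r"\bmy (\w+)", text, re.IGNORECASE)
    if m:
        memory.append(m.group(1))
    # Try the keyword rules first.
    for pattern, template in RULES:
        hit = re.search(pattern, text, re.IGNORECASE)
        if hit:
            return template.format(*hit.groups())
    # No rule fired: fall back on a remembered topic, if any.
    if memory:
        return "Earlier you said something about your {}.".format(memory.pop(0))
    return "Please go on."
```

So if a user says "my mother makes me angry" early on, the keyword rule
answers immediately, but "mother" is also stored; several exchanges
later, when nothing matches, the program can come back with "Earlier you
said something about your mother."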

>But both hardware and software have matured since the times of Eliza. A
>modern conversation program should have a sufficiently large database
>on a wide variety of topics, rules governing behavior and ethics, judgement
>and discretion. It should also know 'normal' English usage, so that its
>replies are not overly pedantic or detailed to the point of boredom.

There's at least one program on the WWW that you might want to try.
See <A HREF="http://www.uio.no:80/~mwatz/c-g.writing/"> Computer
Generated Writing</A>, and follow the link to "Julia the Chatterbot".
There you'll also find some discussion of the general problem of
trying to fool a human.

-- jd
