From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!agate!doc.ic.ac.uk!uknet!edcastle!cam Thu Oct  8 10:11:31 EDT 1992
Article 7143 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!agate!doc.ic.ac.uk!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Dualism
Message-ID: <26643@castle.ed.ac.uk>
Date: 7 Oct 92 16:16:40 GMT
References: <1992Oct6.171057.26199@oracorp.com>
Organization: Edinburgh University
Lines: 24

In article <1992Oct6.171057.26199@oracorp.com> daryl@oracorp.com (Daryl McCullough) writes:
>In article <26534@castle.ed.ac.uk>, cam@castle.ed.ac.uk (Chris
>Malcolm) writes:

>>Many years ago to entertain some schoolkids I wrote a BASIC program...

>The assumption behind behaviorist AI is not that "If you can fool
>someone into thinking a program is intelligent, then it is
>intelligent", but "If the program behaves like an intelligent person
>in all possible circumstances, then it is intelligent". While there
>may be philosophical arguments against this position, people's
>gullibility when interacting with ELIZA is no argument.

Your comments are perfectly correct but irrelevant.

The people in question were _schoolkids_. The point of the
exercise was to demonstrate to them their own gullibility. And the
point of my posting was to query someone whose rather original
definition of "conscious" seemed to include such a simple device as
this trivial program.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205