From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!swrinde!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Sun Dec  1 13:06:02 EST 1991
Article 1688 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!mips!swrinde!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <5727@skye.ed.ac.uk>
Date: 27 Nov 91 20:18:47 GMT
References: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 19

In article <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:

>Suppose AI researchers could build a robot that was indistinguishable
>from a human being in every way -- sensorimotor behavior, language
>abilities, learning and reasoning powers, even physical appearance.
>Would you argue that this robot is incapable of consciousness simply
>because it was the product of human engineering rather than mutation
>and natural selection?
>
>If so, why?
>
>If not, then it seems that any proof of the impossibility of "strong
>AI" (broadly defined) requires a convincing argument why it is
>impossible in theory to build such an anthropomorphic robot.

The problem here is that you first say "in every way" and then list
only a small number of things.  If you really do mean in every way --
if you cut it, it bleeds, etc. -- then its relevance to such questions
as whether computers can think is, at best, obscure.
