From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!rochester!yamauchi Sun Dec  1 13:06:09 EST 1991
Article 1701 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <YAMAUCHI.91Nov27203011@magenta.cs.rochester.edu>
Date: 28 Nov 91 01:30:11 GMT
References: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> <5727@skye.ed.ac.uk>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester
Lines: 41
In-Reply-To: jeff@aiai.ed.ac.uk's message of 27 Nov 91 20:18:47 GMT
Nntp-Posting-Host: magenta.cs.rochester.edu

In article <5727@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:

>>Suppose AI researchers could build a robot that was indistinguishable
>>from a human being in every way -- sensorimotor behavior, language
>>abilities, learning and reasoning powers, even physical appearance.
>>Would you argue that this robot is incapable of consciousness simply
>>because it was the product of human engineering rather than mutation
>>and natural selection?
>>
>>If so, why?
>>
>>If not, then it seems that any proof of the impossibility of "strong
>>AI" (broadly defined) requires a convincing argument why it is
>>impossible in theory to build such an anthropomorphic robot.

>The problem here is that you first say "in every way" and then list
>a small number of things.  If you really do mean in every way, if
>you cut it it bleeds, etc, then its relevance to such questions as
>whether computers can think is, at best, obscure.

Does it make any difference whether it bleeds or not?  I've included
those attributes that I consider most relevant to "thinking".  Are you
arguing that I have omitted relevant attributes or included irrelevant
ones?  If the former, what else would you suggest?  If the latter,
what difference does it make?

For the sake of argument, consider three cases:

Case I:  A robot absolutely indistinguishable from a human.

Case II: A robot behaviorally indistinguishable from a human, and
physically distinguishable only through surgery or dissection.

Case III: A robot behaviorally indistinguishable from a human, similar
in physical capabilities and structure (bipedal, two dextrous
arms/hands, stereo vision/hearing, etc.), but very different in
appearance (e.g. Kevlar and titanium rather than skin and bones).

How would you answer the question (regarding consciousness) for each
of these cases?
