From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!rochester!yamauchi Sun Dec  1 13:06:27 EST 1991
Article 1731 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <YAMAUCHI.91Nov28161315@indigo.cs.rochester.edu>
Date: 28 Nov 91 21:13:15 GMT
References: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> <5727@skye.ed.ac.uk>
	<YAMAUCHI.91Nov27203011@magenta.cs.rochester.edu> <5739@skye.ed.ac.uk>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester
Lines: 42
In-Reply-To: jeff@aiai.ed.ac.uk's message of 28 Nov 91 18:09:18 GMT
Nntp-Posting-Host: indigo.cs.rochester.edu

In article <5739@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <YAMAUCHI.91Nov27203011@magenta.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>>In article <5727@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>>In article <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>>
>>>>Suppose AI researchers could build a robot that was indistinguishable
>>>>from a human being in every way -- sensorimotor behavior, language
>>>>abilities, learning and reasoning powers, even physical appearance.
>>>>Would you argue that this robot is incapable of consciousness simply
>>>>because it was the product of human engineering rather than mutation
>>>>and natural selection?

>No.  Nor would Searle.  The Searle complaint about computers is
>not that they're man-made.

>>I've included those attributes that I consider most relevant to
>>"thinking".  Are you arguing that I have omitted relevant attributes
>>or included irrelevant ones?  If the former, what else would you
>>suggest?  If the latter, what difference does it make?

>Once you start omitting attributes, it depends on what you omit.
>No one knows exactly what attributes are required.

Exactly -- and this is true of humans as well.  Yet, we can "omit
attributes" from humans and still judge them to be conscious.  For
example: we consider as conscious people who are blind, deaf,
paraplegic, or quadriplegic, or who have artificial limbs or organs.
We even consider people who are severely retarded to be conscious.

Why?  Because their behavior is more or less similar to what we expect
from other conscious beings, meaning it is -- broadly speaking -- more
or less similar to our own, and the one thing we do know is that we
ourselves are conscious.  (Or maybe that should have been "I" rather
than "we" :-)

Only in the most extreme cases of absence of intelligent behavior
(e.g. mental vegetables, people in comas) do we decide that these
individuals are not conscious.

Since we use a behavioral definition for ascribing consciousness to
humans, why not use a behavioral definition for ascribing
consciousness to machines?


