Article 1754 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <YAMAUCHI.91Nov29151342@magenta.cs.rochester.edu>
Date: 29 Nov 91 20:13:42 GMT
References: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> <5727@skye.ed.ac.uk>
	<YAMAUCHI.91Nov27203011@magenta.cs.rochester.edu> <5739@skye.ed.ac.uk>
	<YAMAUCHI.91Nov28161315@indigo.cs.rochester.edu> <5754@skye.ed.ac.uk>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Organization: University of Rochester
Lines: 39
In-Reply-To: jeff@aiai.ed.ac.uk's message of 29 Nov 91 16:57:37 GMT
Nntp-Posting-Host: magenta.cs.rochester.edu

In article <5754@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <YAMAUCHI.91Nov28161315@indigo.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>>In article <5739@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>>Exactly -- and this is true of humans as well.  Yet, we can "omit
>>attributes" from humans and still judge them to be conscious. 

>It matters which attributes we omit.  No one knows exactly what
>attributes are required.  If Searle's argument is correct, it shows
>that they must amount to more than merely instantiating the right
>computer program.

>Moreover, if Searle is right, having the right behavior isn't
>enough.

True, but as many, many people have pointed out -- Searle isn't right.
I still have yet to see a convincing rebuttal to the "Systems Reply".

>>Since we use a behavioral definition for ascribing consciousness to
>>humans, why not use a behavioral definition for ascribing
>>consciousness to machines?

>The reason not to use the behavioral definition for
>machines is (1) machines may produce the behavior in a way that
>is different from how humans do it, (2) it might turn out that 
>those differences matter, so that we'll have good reasons for
>deciding that what machines do is "just a trick", and (3) we
>don't yet know enough to rule out (2).

It depends on how "those differences matter".  In my opinion,
they matter only if they result in differences in behavior.

Using this line of reasoning, and lacking a full understanding of the
human brain, one could argue that it is possible that the brains of
men and women produce intelligent behavior using different mechanisms.
And from (2) and (3), one could argue that these differences might
matter.  Therefore, carrying this argument to its logical extreme, one
should consider the issue of consciousness in members of the opposite
sex to be an open question...
