From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Sun Dec  1 13:06:36 EST 1991
Article 1748 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: A Behaviorist Approach to AI Philosophy
Message-ID: <5754@skye.ed.ac.uk>
Date: 29 Nov 91 16:57:37 GMT
References: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu> <5727@skye.ed.ac.uk> <YAMAUCHI.91Nov27203011@magenta.cs.rochester.edu> <5739@skye.ed.ac.uk> <YAMAUCHI.91Nov28161315@indigo.cs.rochester.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 34

In article <YAMAUCHI.91Nov28161315@indigo.cs.rochester.edu> yamauchi@cs.rochester.edu (Brian Yamauchi) writes:
>In article <5739@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>Exactly -- and this is true of humans as well.  Yet, we can "omit
>attributes" from humans and still judge them to be conscious. 

It matters which attributes we omit.  No one knows exactly what
attributes are required.  If Searle's argument is correct, it shows
that they must amount to more than merely instantiating the right
computer program.  To refute Searle you have to show that all
that's required is to instantiate the right program.

Moreover, if Searle is right, having the right behavior isn't
enough.  Maybe we thought it was enough before Searle came along,
but now we have to wonder.  Once we start to wonder, we have
to consider the possibility that it matters how the behavior is
produced.  Once we consider that possibility, we can't dismiss
it by saying "we didn't think it mattered before Searle came along".

>Since we use a behavioral definition for ascribing consciousness to
>humans, why not use a behavioral definition for ascribing
>consciousness to machines?

I have discussed this at length in other articles.  In the case of
humans it's more than behavior that we have in common.  Indeed, we
judge humans to be conscious even when most of the behavior is
missing.  The reason not to use the behavioral definition for
machines is (1) machines may produce the behavior in a way that
is different from how humans do it, (2) it might turn out that
those differences matter, so that we'll have good reasons for
deciding that what machines do is "just a trick", and (3) we
don't yet know enough to rule out (2).

-- jd