From newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!scott.skidmore.edu!psinntp!psinntp!dg-rtp!sheol!throopw Sat Oct 24 20:44:55 EDT 1992
Article 7379 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!usc!rpi!scott.skidmore.edu!psinntp!psinntp!dg-rtp!sheol!throopw
>From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: solely on account
Summary: I can sort of see where Searle is coming from...
Message-ID: <719720425@sheol.UUCP>
Date: 22 Oct 92 00:05:11 GMT
References: <26892@castle.ed.ac.uk>
Lines: 39

: From: cam@castle.ed.ac.uk (Chris Malcolm)
: Message-ID: <26892@castle.ed.ac.uk>
: He [..Searle..]
: is saying more than that. He explicitly says that there is no reason
: why a digital computer should not be able to think, but that, if it
: should really think, it will not be solely on account of the software it
: is running.

Well... there is a way in which I agree with Searle here.  The thought
that the computer presumably accomplishes is not "solely on account" of
the software it is running, because software is an abstraction, occurring
only in the minds of beholders.  The computer does what it does, and
presumably thinks what it thinks, because of physical processes
occurring within it.  (One might even say "causal powers" if one could
avoid the attendant nausea.) The fact that these physical processes were
designed to implement a process described by some software is, in my
opinion, irrelevant.

But note that from this perspective, NOTHING that ANY computer does is
"solely on account" of the software it is running, so this insight
seems trivial to me.

So, the question becomes, is any physical process that realizes
the process described by that software capable of thought?  Remember,
there may be many, many such physical processes.  I think Searle's
position is "obviously not, the CR provides such a case".  (I disagree
that the CR is actually certain to be such a case, of course.)

Here's where Harnad's "transduction" comes in, perhaps.  If the physical
process in question has an interface "like ours", that is, the
entity/environment boundary is humaniform, THEN it is (probably) capable
of thought, otherwise (maybe) not.  But the whole notion of an
entity/environment boundary is so arbitrary and slippery that such a
distinction doesn't seem worth pursuing to me.

( But, sigh, I may still be misunderstanding Harnad's position
  at least, and maybe even Searle's.  )
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw