Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!howland.reston.ans.net!vixen.cso.uiuc.edu!news.uoregon.edu!news.dacom.co.kr!news.netins.net!internet.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Chinese Room debunked
Message-ID: <DAsz4G.40D@spss.com>
Sender: news@spss.com
Organization: SPSS Inc
References: <3s9vdq$bru@news.tamu.edu> <3sceu9$4d6@nntp5.u.washington.edu> <DALH90.7oM@spss.com> <3sd7e6$kpa@nntp5.u.washington.edu>
Date: Mon, 26 Jun 1995 23:12:15 GMT
Lines: 94

In article <3sd7e6$kpa@nntp5.u.washington.edu>,
Gary Forbis  <forbis@cac.washington.edu> wrote:
>markrose@spss.com (Mark Rosenfelder) writes:
>|> Gary Forbis  <forbis@cac.washington.edu> wrote:
>|> >Surely no one is claiming that all implementations of the
>|> >idealized computer running the right program have minds.  Such a belief
>|> >would abandon materialism completely.  (where "implementation" means any 
>|> >physical system that can map to the formal language.)
>|> The reference to materialism seems confused to me.  In what way does
>|> implementation-independence introduce immaterial substances?  
>
>Are you going to make me fall back to a weaker claim?  Materialism holds that
>"all being and processes and phenomena can be explained as manifestations or
>results of matter." (webster's 10th)
>
>I'm not sure I claimed implementation independence introduced immaterial 
>substances.  

Not directly; but you talked about abandoning materialism, which I took as 
equivalent to embracing dualism (or some higher-numbered ism).  Your 
clarification is indeed a weaker claim:

>Rather I was suggesting that implementation independence 
>introduces material indeterminacy.  [a] How can we deduce we are made of 
>hydrocarbon molecules if there are no mental differences between being made of
>hydrocarbon molecules and silicon molecules?  [b] Why even propose there are 
>these different substances unless there is some phenomenological difference?  

These questions seem pretty easy to me, so I am probably missing the source
of metaphysical anxiety here.  The usual answer to [a] is to send some
material out to the lab and see what they come up with.  One could express
some Zen-like skepticism about whether examining someone else's brain really 
determines the nature of our own, but I don't think that's the problem.  As for 
[b], we distinguish carbon from silicon because it's overwhelmingly useful in 
physics and chemistry, whether or not it helps any in cognitive science.

>It's hard for me to believe that the material most intimate to me is 
>inconsequential to my mental existence but more remote material has effects.

It's not exactly inconsequential: remove it, and your mental existence stops.
Introduce some specific physical problems-- a brain tumor, a lack of oxygen,
an electrical stimulus, a deficit of dopamine, a sharp blow to the head--
and there will be cognitive effects.  An emergence-based account of cognition
doesn't deny any of this, any more than a model of a PC as a von Neumann 
machine denies that strong electrical fields or a spilled can of Coke
affect the machine's computation.

What the strong AI hypothesis does claim is that to have a mind at all
you don't need organic substances; anything that will support a program of the
required complexity will do.  I take it this is what you find hard to swallow.  
Perhaps you could examine your intuitions more deeply, and say *why* you find 
it hard to swallow?  

For me, the opposite thesis is the one I find non-intuitive.  I can't imagine 
anything about hydrocarbons that leads directly to minds, or about silicon
that prevents them.  

>|> We have plenty of examples of things not tied to any particular physical
>|> realization which don't threaten materialism.  Money, for instance: if I have 
>|> $10, it may be physically realized as a piece of paper, a pile of coins, a 
>|> scribbled IOU, or an entry in a bank's database.  We'd look askance at
>|> someone who claimed that treating all these manifestations as money requires
>|> one to "abandon materialism completely".
>
>What's being proposed is that there is a formal system that defines me such
>that I cannot tell the difference between being implemented as a brain
>in a body in the world and being implemented as a program interacting with
>other programs in a computer.  How does this differ from solipsism?  

Well, to give another infuriatingly naive answer, when the results come back
from the lab, the solipsist (but not the materialist) refuses to believe them.
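To put my earlier money analogy in programmer's terms: implementation-independence is just the familiar point that one abstract value can be realized by entirely different substrates.  A minimal sketch (the class names are my own invention, purely illustrative):

```python
# Three physically unrelated realizations of "ten dollars".
# The abstract value is implementation-independent; no dualism required.

class PaperBill:
    """A ten-dollar bill."""
    def value(self):
        return 10

class CoinPile:
    """A pile of coins, denominated in cents."""
    def __init__(self, coins):
        self.coins = coins  # e.g., forty quarters
    def value(self):
        return sum(self.coins) // 100

class BankRecord:
    """An entry in a bank's database, in cents."""
    def __init__(self, cents):
        self.cents = cents
    def value(self):
        return self.cents // 100

# All three count as $10 in every way that matters to commerce,
# despite sharing no physical substrate.
holdings = [PaperBill(), CoinPile([25] * 40), BankRecord(1000)]
print(all(h.value() == 10 for h in holdings))  # True
```

Nobody thinks treating all three as money abandons materialism; each realization is a perfectly ordinary physical object.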

Let's look at it another way.  I'm wary of completely abstract theories of
intelligence; I think minds are best explained as biological devices for
facilitating the complex interactions of an organism with the real world 
(including other organisms as complex as itself).  As such I expect that the
way *our minds* work is very closely tied to our biology and our evolutionary
history.  On the other hand I don't generalize this to *minds in general*.
An alien or a robot mind might have many differences in detail from our 
own, both mechanically and introspectively, yet still be such that it
would be mere prejudice not to call it a mind.

You find it unlikely, it seems, that "a program interacting with other 
programs" could be introspectively indistinguishable from a human mind.
I do too; not because I find that minds must be based on hydrocarbons,
but because simulating the real world in enough detail to raise such an
epistemological dilemma seems almost impossible in practical terms.

A better example would be a robot, built to live in the same world we do.
Even here I'd think it would be possible to *distinguish* between being a 
robot and being a human.  (Stick a pin in your hand and see if blood comes 
out.)  However, the interesting question for me is not whether a robot
indistinguishable from a human could be created, but whether we could 
create a robot which we'd be willing to say has a mind.
