From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!uchinews!spssig.spss.com!markrose Thu Apr 16 11:34:00 EDT 1992
Article 5043 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!uchinews!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: syntax and semantics
Message-ID: <1992Apr10.165132.3584@spss.com>
Date: Fri, 10 Apr 1992 16:51:32 GMT
References: <92098.170625JPE1@psuvm.psu.edu> <1992Apr8.215800.18021@mp.cs.niu.edu> <1992Apr9.204735.21732@psych.toronto.edu>
Nntp-Posting-Host: spssrs7.spss.com
Organization: SPSS Inc.
Lines: 66

In article <1992Apr8.215800.18021@mp.cs.niu.edu> rickert@mp.cs.niu.edu
(Neil Rickert) writes:
>> And by the way,
>>I suggest that you ask your bank to transfer all of your accounts to me.  They
>>are clearly useless to you, since they are mere formal manipulations of a
>>computer without semantic content.  But once transferred to me I suspect I
>>can squeeze enough semantics out of them to buy myself a few good meals.

and in <1992Apr9.204735.21732@psych.toronto.edu> michael@psych.toronto.edu 
(Michael Gemar) replies:
>Come on, Neil, surely you know better than this!  Money is not intrinsically
>"in" the bank's program.  Heck, if everyone decided to interpret *your*
>bank's computer as playing chess instead, how would you prove them wrong?
>Grab a wad of greenbacks out of its RAM?
>
>As always, the instantiated program can be *interpreted* as having meaning.
>Whether that meaning is intrinsic to the program, however, is the question.

Neil's bank's computer has money in it by social convention; or to put it
another way, we've assigned a monetary interpretation to what it does.
As you say, if "everyone decided" that it was playing chess, it wouldn't
be money anymore.  

Computers are easy targets for this sort of change of interpretation,
because 1) we know that they can be reprogrammed so easily, 2) their
algorithms can be seen as a type of formal system, and formal systems
are contentless, and 3) their programs generally contain so little encoding
of real-world knowledge that it's hard to make a case that they contain
any intrinsic semantics.
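To make the point concrete: the same bit pattern in a machine's memory supports incompatible readings, and nothing in the bits themselves picks one out.  Here's a small sketch (the "balance" and "piece code" schemes are invented for the example, not anyone's real encoding):

```python
import struct

# Four bytes of machine state.  Both readings below are conventions
# imposed from outside; the bytes don't favor either one.
state = bytes([0x00, 0x00, 0x30, 0x39])

# Interpretation 1: a big-endian 32-bit account balance in cents.
balance_cents = struct.unpack(">I", state)[0]

# Interpretation 2: four squares of a chess board, each byte an
# arbitrary piece code (0 = empty square).
piece_codes = list(state)

print(balance_cents)   # 12345
print(piece_codes)     # [0, 0, 48, 57]
```

Under the first convention the machine holds $123.45; under the second it holds part of a chess position.  Which one it "really" contains is settled, if at all, only by how the system is hooked up to the world and to us.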

But why are human beings immune from the same kind of attack?  What makes
us think that the biochemical soup in our skulls instantiates a person
(not, for instance, a bank account or a chess machine)?  Why do we 
privilege just one interpretation of what's going on in the brain?

Introspection is irrelevant here, because it begs the question.  Under
one interpretation of what's in the brain, it contains a mind and that
mind believes itself to have semantics.  But to conclude therefrom that
the brain has semantics is to accept, a priori, that interpretation.
We're trying to decide what interpretation to assign; assuming a
particular interpretation must follow, not precede, this process.

Naturally, we all do prefer this interpretation (we do think of ourselves
as persons).  Why?  Not for very airtight reasons: 1) because we're 
rendering a verdict on ourselves-- the person in the brain is trying to
decide if brains contain persons-- and we're naturally biased to say yes;
2) the interpretation is the most straightforward one possible, and
leaves the least unexplained (an interpretation of the brain as a chess
playing machine would have to be fairly outré); 3) on a physical level
we need something to run our bodies and interact with the world and 
it's reasonable to suppose that nature has provided us with a mind
(and not a chess machine) to do so; 4) we haven't figured out a way
to reprogram ourselves (make the brain be something besides a mind).

The problem is, these arguments could be applied to an intelligent
robot, too: 1) it's a person because it thinks it's one; 2) that its
program refers to the outside world is the most straightforward
explanation of it; 3) it was designed as an intelligent system, so
that's what it is; 4) a robot is too expensive to use as anything
but a robot.

Let me try to reduce this overlong posting to a slogan: brains have minds
only under one interpretation of what they're doing; but we have good
reasons to privilege that interpretation.  We might have similar
good reasons to privilege an interpretation of an AI system as
having a mind too.


