From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!agate!stanford.edu!rutgers!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Thu Dec 26 23:57:56 EST 1991
Article 2348 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!agate!stanford.edu!rutgers!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle's response to silicon brain
Message-ID: <1991Dec21.161228.16497@news.media.mit.edu>
Date: 21 Dec 91 16:12:28 GMT
Article-I.D.: news.1991Dec21.161228.16497
References: <BSIMON.91Dec19071828@elvis.stsci.edu> <1991Dec19.141418.1132@linus.mitre.org> <349@tdatirv.UUCP>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 43
Cc: minsky, sarima@tdatirv.UUCP

In article <349@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1991Dec19.141418.1132@linus.mitre.org> jkm@mbunix.mitre.org (Millen) writes:
>|The fact is, that various parts of automobile engines have been
>|computerized, viz., fuel injection, quite successfully.  On the
>|other hand, it is also clear that certain substitutions will fail;
>|if you try to change the explosion in the cylinder into a
>|computer simulation, the engine stops working.  
>
>Certainly.  When you replace a physical operation with its simulation it
>stops working.  But it is important to note that it *stops* working.
>
>As far as I can see Searle is proposing that some system can be operationally
>equivalent to a human and still not possess 'intentionality' or 'causal
>powers'.  This is a very different claim.  It's like saying a simulated engine
>would still rotate, but it wouldn't generate any power.
>
>If he *is* claiming that there would be a functional difference, what is
>it?  How do we tell that the Chinese Room lacks intentionality?  What observable
>action can a human perform under like conditions that the CR cannot?
>[If the difference is not even theoretically observable, then it is irrelevant].

I like this.  Stanley puts his finger on the central quibble here.
Yes, the simulated engine wouldn't generate any "here in the world
outside the computer" power -- but if you put it in a suitably
simulated car, and engage the suitably simulated clutch, it will
drive just fine down the simulated road.

Ok, so there's a difference.  But about that "intentionality", the
problem is that we CAN observe the answers to questions we ask about
what happens inside the simulation.  So when we ask the simulated
person driving the simulated car questions like, "Is it OK to kill
another human in self-defense?" then we can observe the answer.
'Information' flows freely both ways through the interface, even
though matter and (gross amounts of) energy don't.  Must we then
presume, to make sense of Searle, that intentionality is matter-like
rather than information-like?  If so, isn't he obliged to demonstrate
its weight, or something?

Oh, I had lunch with van Quine a couple of months ago, so I asked him
what he thought about intentionality, expecting to receive some
wisdom.  He simply replied, "There's no such thing."  I was
disappointed at the time, but later realized that I had received more
wisdom than I deserved.
