From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan 16 17:19:40 EST 1992
Article 2641 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <5949@skye.ed.ac.uk>
Date: 10 Jan 92 19:34:00 GMT
References: <5815@skye.ed.ac.uk> <1991Dec12.193222.27298@bronze.ucs.indiana.edu> <5909@skye.ed.ac.uk> <1992Jan10.005426.24694@bronze.ucs.indiana.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 85

In article <1992Jan10.005426.24694@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <5909@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>>We all know that we can follow a recipe to produce a cake.
>>But it's hard to see how that's the same as instantiating
>>(another word you've changed) the recipe, much less that it's
>>the same as relying only on this instantiation to produce the
>>cake.  We at least have to use the right materials, and it's
>>the physical properties of the materials that ensure we get
>>a cake in the end rather than a bowl of sludge.
>
>The implementation of both programs and recipes (I don't use the word
>"instantiation", as I don't think that computers instantiate programs,
>they implement them) requires that the corresponding physical system has
>certain physical properties, specified by the program/recipe.  For
>recipes, these are physico-chemical properties; for programs, these
>are causal properties.

I don't see much problem with saying a computer instantiates
a program, but let that go.

For the rest, the way in which a recipe is analogous to a program
is that it is a set of instructions that can be followed by a
computer/person for manipulating ingredients in certain ways.
What about programs is analogous to saying what the ingredients
are?

In any case, the only way a recipe can specify the physico-chemical
properties of a cake is by relying on the physico-chemical properties
of the ingredients.  The recipe doesn't produce something that's
crumbly, a physical property, without the aid of something that
already has physical properties.  The recipe producing something
crumbly is supposed to map to the program producing something
with intentionality, a semantic property.  So, by analogy, the
program will need the aid of something that already has semantic
properties.

It looks to me like your analogy proves Searle's point, not yours.

>Same point.  "Egg" certainly has to lead to eggs in the implementation,
>and "S1->S2" has to lead to the appropriate causation in the
>implementation.

The instructions in the recipe, not the eggs, are analogous to S1->S2.
If you want to argue by analogy ("you should think about programs and
minds as you do about recipes and cakes"), the analogy ought to be one
that makes it easy to transfer one's understanding from recipes and
cakes to programs and minds.  If the mapping is just weird and
confusing, you should try a different approach.  That is, if your
aim is to help someone to understand you, not just to make it
harder for them to argue against you.

>>A person following a cake recipe doesn't
>>become crumbly; but if computational theories of mind are
>>correct, something that employs the right program does become
>>"minded".
>
>No -- as any defender of the Systems reply will tell you, the person
>following the program doesn't gain a mind any more than the person
>following a recipe becomes a cake.

So the person-and-cake system becomes crumbly?  I'm not sure
how you think it works on that side.  Not to mention that
cakes do not implement recipes.  

Also, how much of your reply hinges on my use of "employs"?
Suppose I'd said "implements" instead?  I thought the strong
AI claim was that a person (whether in Chinese Room or not)
has a mind as a consequence of implementing the right program.

I think my remark is clearer in context, thus:

   Finally, note that Searle says "having a mind" while you say
   "produce a cake".  The mind would be an additional property
   the computer would gain by instantiating the program, not
   an external object.  A person following a cake recipe doesn't
   become crumbly; but if computational theories of mind are
   correct, something that employs the right program does become
   "minded".

Note too that I was making a new point here, not the same one I
was making in the parts discussed earlier.  (Just in case this
wasn't clear.)

-- jeff
