From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:35 EST 1992
Article 2937 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <6033@skye.ed.ac.uk>
Date: 21 Jan 92 01:59:31 GMT
References: <5909@skye.ed.ac.uk> <1992Jan10.005426.24694@bronze.ucs.indiana.edu> <5949@skye.ed.ac.uk> <1992Jan12.214251.21761@bronze.ucs.indiana.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 76

In article <1992Jan12.214251.21761@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <5949@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>>For the rest, the way in which a recipe is analogous to a program
>>is that it is a set of instructions that can be followed by a
>>computer/person for manipulating ingredients in certain ways.
>>What about programs is analogous to saying what the ingredients
>>are?
>
>As with every analogy, one shouldn't expect every element to carry
>though.  The main point of the analogy is that recipes and programs
>are both syntactic objects that effectively act as specifications or
>descriptions for physical systems, and for which there exist
>implementation procedures by which one can go from the syntactic
>object to the physical system.  Blueprints and houses might work
>as well as recipes and cakes.

The analogy still seems skewed to me.  The design for the hardware
tells you how to build the machine.  That seems analogous to
blueprints and recipes.  The program tells you how to build a
machine, in a sense, by putting a "universal" machine into a state
where it contains the machine-language version of the program.  But a
high-level language does not specify machine states in any direct
way.
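
To make the point about machine states concrete, here is a minimal
sketch (my own illustration, not anything from the thread; the
assembly is only the sort of thing a sun4 compiler might emit, not
checked output):

    /* add_one.c -- a trivial high-level program */
    int add_one(int n)
    {
        return n + 1;   /* says nothing about registers or opcodes */
    }

    /* A sun4 (SPARC) compiler might turn this into something like
     *
     *         retl
     *         add     %o0, 1, %o0
     *
     * while a 68000 or VAX compiler would emit entirely different
     * instructions.  The C text fixes the behaviour; the machine
     * states come from the compiler and the hardware.
     */

Same source, different machine states: that is the sense in which the
high-level text underdetermines the physical system.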

>>In any case, the only way a recipe can specify the physico-chemical
>>properties of a cake is by relying on the physico-chemical properties
>>of the ingredients.  The recipe doesn't produce something that's
>>crumbly, a physical property, without the aid of something that
>>already has physical properties.
>
>Indeed: and by analogy, the program doesn't produce something that
>has causal organization without the aid of something (an implementing
>device) that already has causal organization.

You get a cake, or not, depending on the ingredients.  So
you get crumbliness or not, depending on the ingredients.
The analogy would be that you get intentionality or not,
depending on the ingredients (e.g., whether it's a brain or
a sun4).

>>The recipe producing something
>>crumbly is supposed to map to the program producing something
>>with intentionality, a semantic property.  So, by analogy, the
>>program will need the aid of something that already has semantic
>>properties.
>
>No: just as none of the elements involved in producing something
>crumbly need themselves be crumbly, none of the elements that go into
>producing something with intentionality need themselves have
>intentionality (the part/whole fallacy comes to mind here).

You're right.

Nonetheless, the choice of ingredients plays a key role, as above.

>In any case, the argument doesn't depend on the analogy being
>perfect.

Yes, but the imperfections are such that the analogy may be as much
an argument for Searle as against him.

>>Also, how much of your reply hinges on my use of "employs"?
>>Suppose I'd said "implements" instead?  I thought the strong
>>AI claim was that a person (whether in Chinese Room or not)
>>has a mind as a consequence of implementing the right program.
>
>Actually that's not necessary for the core of strong AI, at least
>as I and many others would want to defend it.  The claim is that
>implementing a program will lead to a mind, not that implementing
>a program is the only way to produce a mind.

So you won't make any claim that human understanding is just a
matter of implementing a program?

-- jeff
