From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!asuvax!gatech!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!jeff Tue Jan 28 12:16:51 EST 1992
Article 3082 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!asuvax!gatech!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <1992Jan23.222251.24486@aisb.ed.ac.uk>
Date: 23 Jan 92 22:22:51 GMT
References: <1992Jan12.214251.21761@bronze.ucs.indiana.edu> <6033@skye.ed.ac.uk> <1992Jan22.201656.22109@bronze.ucs.indiana.edu>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 35

In article <1992Jan22.201656.22109@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>In article <6033@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>
>>The analogy still seems skewed to me.  The design for the hardware 
>>tells you how to build the machine.  That seems analogous to
>>blueprints and recipes.  The program tells you how to build a
>>machine, in a sense, by putting a "universal" machine into
>>a state where it contains the machine language version of a
>>program.  But a high-level language does not specify machine
>>states in any direct way.
>
>I'd put the point by saying that the program underspecifies the machine,
>just as a blueprint or recipe does.  There are lots of different machines
>that implement a given program, and lots of different houses that
>implement a given blueprint, but they all have relevant properties in
>common.

But to that I can just repeat what I wrote before:

You get a cake, or not, depending on the ingredients.  So
you get crumbliness or not, depending on the ingredients.
The analogy would be that you get intentionality or not,
depending on the ingredients (e.g., whether it's a brain or
a sun4).

>>So you won't make any claim that human understanding is just a
>>matter of implementing a program?
>
>I don't think that it's an important claim for AI to make.  There
>may be a sense in which it's true.  What counts is that there exist
>programs such that implementations of these programs have all the
>essential features of mentality in humans.

So there could be human understanding and this other kind, which
are not the same?  Or are the differences all nonessential?


