From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!gatech!swrinde!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Mar 24 09:56:29 EST 1992
Article 4523 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!gatech!swrinde!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Systems Reply I
Message-ID: <6422@skye.ed.ac.uk>
Date: 18 Mar 92 00:12:13 GMT
References: <6374@skye.ed.ac.uk> <1992Mar11.201637.21875@psych.toronto.edu> <44765@dime.cs.umass.edu>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 61

In article <44765@dime.cs.umass.edu> orourke@sophia.smith.edu (Joseph O'Rourke) writes:
>	It seems to me that the way in which a program manipulates
>its symbols shows that it has attached some type of meaning to them:
>
>(a) As a crude example, if a program passes a double to a function to
>    compute the arctangent, it "knows" in some primitive sense that
>    the bits it is moving around represent a real number, and that
>    the library arctangent function expects such.  The bits are not
>    meaningless:  they are manipulated in a way appropriate for double-
>    precision floating point numbers.
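
For concreteness, the kind of call being described might look like
this in C (the names and values here are mine, purely for
illustration, not from the quoted post):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double x = 1.0;
        double theta = atan(x);   /* the library routine expects a double */
        printf("atan(%f) = %f\n", x, theta);
        return 0;
    }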

I have a suspicion that this is just our old dispute about whether
there can be different, equally good interpretations of, say, the
inputs and outputs to the Chinese Room.

Suppose I represent floats as strings of letters (in base 26, say)
and do adds and subtracts on them.  In some sense the machine "knows"
these strings represent real numbers?  Well, maybe so.  But why is
that sense a relevant one?

(I'm sure there are better examples for making this point, but that's
the best I could do at this hour.)
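
A rough sketch of the sort of thing I mean, in C; the fixed width, the
'a'-to-'z' encoding, and the restriction to whole numbers (to keep it
short) are all just assumptions for illustration:

    #include <stdio.h>

    #define WIDTH 8   /* assumed fixed number of base-26 "digits" */

    /* Add two base-26 letter strings ('a' = 0 ... 'z' = 25, most
       significant digit first), writing the result into out. */
    void add26(const char *a, const char *b, char *out)
    {
        int i, carry = 0;
        for (i = WIDTH - 1; i >= 0; i--) {
            int sum = (a[i] - 'a') + (b[i] - 'a') + carry;
            out[i] = 'a' + sum % 26;
            carry = sum / 26;
        }
        out[WIDTH] = '\0';
    }

    int main(void)
    {
        char result[WIDTH + 1];
        add26("aaaaaabz", "aaaaaaab", result);  /* 51 + 1 = 52 */
        printf("%s\n", result);                 /* prints "aaaaaaca" */
        return 0;
    }

The machine shuffles letters in a way that is perfectly appropriate
for numbers, but nothing in the shuffling itself picks out "number"
as the right reading rather than, say, "string of letters".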

Anyway, this was sort of a response to my:

>  >In article <6374@skye.ed.ac.uk> 
>  	jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>  >
>  >>BTW, I still haven't seen a satisfactory answer to the point that
>  >>the Room manipulates meaningless symbols (ie, treats them syntactically)
>  >>without any way to attach meaning to them.  But maybe I've just
>  >>missed it in all the noise.

How does the Room assign the right meaning?  How do we deal with
Putnam's cats and cherries problem?  Can computers do the same?
(Zeleny says no, I think.)  And so on.

Or, to go back to your approach to "understanding X" as being able to
answer questions about X: the Geometry Room, for example, can
answer questions about geometry (even if the person in the Room hasn't
a clue).  But is that because this system understands geometry or
because the programmers (or the mathematicians they consulted)
understand geometry?

In the Chinese Room case, maybe the programmers understand Chinese,
but how does "the system" manage to do it?

I don't expect the AI side to be able to answer this in detail,
of course, and they don't need to if all they want to do (for
now) is to say Searle has failed to show the system doesn't
understand, at least as far as the Chinese Room argument proper
is concerned.  (After all, Searle can have failed to show this
even if the system cannot possibly understand.)

But the situation is a bit different when we come to the "syntax
isn't enough for semantics" arguments.  The person in the Room is
performing some manipulations that depend only on "syntax", such
as the shapes of the squiggles.  And the person doesn't understand
what the symbols mean.  How is it that the system can do any better?

-- jd


