From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!sdd.hp.com!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Mon Mar  9 18:33:54 EST 1992
Article 4143 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!sdd.hp.com!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Reference (was re: Multiple Personality Disorder and Strong AI)
Message-ID: <6305@skye.ed.ac.uk>
Date: 28 Feb 92 19:59:35 GMT
References: <1992Feb25.182526.12698@oracorp.com>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 63

In article <1992Feb25.182526.12698@oracorp.com> daryl@oracorp.com writes:
>Christopher Green writes (in response to Stanley Friesen):

>I think you (like Searle before you) are equivocating on the use of
>the phrase "programs are purely syntactic". A program is certainly a
>syntactic object; it is a formal description, or specification, of a
>class of systems (the "implementations" of the program). Learning to
>program involves (at least in part) learning the syntax of a
>programming language. However, the fact that a program is syntactic
>(as is any formal description) does not mean that the implementations
>of the program are purely syntactic.

How about the manipulations they perform?  Where do they refer to
any semantics of the symbols involved?

>Now, let's turn to the other sense in which it is commonly claimed
>that "programs are purely syntactic". *Computers* don't directly
>manipulate real objects; a computer cannot, for instance, examine a
>real hamburger, it can only examine a syntactic internal
>representation of a hamburger.

"The" other sense?

Again I recommend:

  From: gudeman@cs.arizona.edu (David Gudeman)
  Newsgroups: comp.ai.philosophy
  Subject: Re: Intelligence Testing
  Message-ID: <11884@optima.cs.arizona.edu>
  Date: 25 Jan 92 00:58:09 GMT
  ]that the human is really a robot.

  Not really.  I'm saying that merely by hypothesizing that a machine is
  able to answer all of the questions, you are hypothesizing that the
  questions do not really test understanding.  _By hypothesis_ then, the
  test you propose does not test understanding.

  I hasten to point out that my assertion does not come from a prior
  assumption that machines don't understand, but from my view of
  "understanding" and of how machines work.  I know that machines work
  by taking input, shuffling it according to some set of rules, and
  spitting the result out.  So if a machine can answer the questions,
  then there is a set of rules that can be followed to turn the
  questions automatically into answers.  But if such a set of rules
  exists, then any question can be answered simply by following the
  rules.

  Such an answer does not show understanding of the subject (or even of
  the question); it only shows correct application of the rules.  So
  once you assume the existence of such a set of rules, then questions
  no longer test the understanding of anything, human or machine.

  Etc.
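Gudeman's picture of a machine -- input shuffled through a fixed set of
rules, answer spat out -- can be sketched in a few lines.  The rule
table and questions below are hypothetical, but the point survives:
nothing in the lookup consults what any of the symbols mean.

```python
# A minimal sketch (rules and questions hypothetical) of a machine that
# "answers questions" by pure rule-following.  It shuffles the input
# token string through a fixed table; no step refers to the meaning of
# any symbol involved.
RULES = {
    "what is the capital of france": "paris",
    "what is two plus two": "four",
}

def answer(question):
    # Purely syntactic: lowercase the string, strip punctuation,
    # and look the resulting token string up in the table.
    key = question.lower().strip("?! .")
    return RULES.get(key, "i do not know")

print(answer("What is two plus two?"))
```

If the machine gets the answer right, that shows only that the right
rule was in the table and was correctly applied -- which is Gudeman's
point about what such a test can and cannot test.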

Or Chris Malcolm's:

  On the other hand, I think it remains true that what a program does
  is to transform some input data into some output data, and that this
  transformation can only be purely syntactic. This seems to me to
  pull the rug out from under the "English reply". Anyone care to
  comment?
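Malcolm's claim can be made concrete with an (entirely hypothetical)
one-rule program: the transformation below applies with equal ease to a
meaningful English word and to gibberish, and nothing in the program
could tell the two apart.

```python
# Sketch of a purely syntactic input-to-output transformation: one
# arbitrary rule (reverse the symbol string).  It treats "hamburger"
# and "hbmaurger" identically; meaning never enters into it.
def transform(data):
    return data[::-1]

print(transform("hamburger"))
print(transform("hbmaurger"))
```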

-- jd
