From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uwm.edu!ux1.cso.uiuc.edu!mp.cs.niu.edu!rickert Tue Mar 24 09:55:44 EST 1992
Article 4457 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uwm.edu!ux1.cso.uiuc.edu!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: The Systems Reply I
Message-ID: <1992Mar14.213045.21776@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <1992Mar12.001918.2564@ccu.umanitoba.ca> <BL1p0D.6II@world.std.com> <1992Mar14.182737.15329@psych.toronto.edu>
Date: Sat, 14 Mar 1992 21:30:45 GMT
Lines: 67

In article <1992Mar14.182737.15329@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>
>The first premise is simply that syntactic manipulation cannot on its own
>yield semantics, which has been argued to be an analytic truth by many
>philosophers who are not involved in the AI debate.
>
>The second premise comes from the foundational assumptions of AI, namely,
>that all relevant human cognitive activity is computable, and can be 
>replicated by the appropriate functional relations.  These relations are,
                                                      ^^^^^^^^^^^^^^^^^^^
>by their nature, at base syntactic.  This premise is adopted by Searle in
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>order to examine the implications of Strong AI.

>The Chinese Room gedanken, as I have argued many times, is not in itself
>the crucial argument.  It is merely an attempt to demonstrate the truth
>of the claim that syntactic manipulations can't yield semantics.  Even

  There has been much discussion of the problem that "understanding" is
not well defined.  But an even more serious problem is that neither
"syntax" nor "semantics" is well defined.

  The claim "syntax can't yield semantics" is intuitively appealing, but
this does not make it true, and indeed it is a statement without content
until we have precise definitions of "syntax" and of "semantics".  In the
intuitive sense, surely syntax cannot yield semantics.  But I have a bunch
of compact disks which were encoded by computers, yet can produce great
symphonies when inserted in a CD player.  If you wish to say that everything
a computer can do is merely syntactic manipulation, then clearly some of
those manipulations can yield semantics.
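 The CD example can be made concrete.  Here is a minimal sketch (the names
and parameter values are mine, purely for illustration, not from the
article): the program below does nothing but arithmetic on integers --
symbol manipulation with no meaning attached anywhere inside the program --
yet the byte string it emits is the 16-bit PCM encoding that a CD player's
D/A converter would render as an audible 440 Hz tone.

```python
import math
import struct

# Pure "syntactic" manipulation: arithmetic on integers.  Nothing in this
# program refers to music, yet the output bytes, fed to a D/A converter,
# would be heard as a concert-A tone.
SAMPLE_RATE = 44100   # samples per second, as on an audio CD
FREQ = 440.0          # 440 Hz (concert A) -- an illustrative choice
DURATION = 0.01       # a hundredth of a second is enough to show the idea

# Compute signed 16-bit sample values of a sine wave.
samples = [
    int(32767 * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(int(SAMPLE_RATE * DURATION))
]

# Pack them as little-endian 16-bit PCM, the sample format used on CDs.
pcm = struct.pack("<%dh" % len(samples), *samples)
```

 Whether one calls this "yielding semantics" is, of course, exactly the
point in dispute; the sketch only shows that the manipulation itself is
mechanical through and through.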

 The sentence I have underscored is at the heart of the debate.  If we
restrict "syntax" to its intuitive sense, that statement is pure nonsense,
and anybody who claims it to be true does not fully understand the capabilities
of computers.  If, on the other hand, we treat the underscored statement as
a definition of "syntax", then this is a much broader view of syntax, and
one which the CR argument does not touch.  In effect the CR argument uses
a broad interpretation of syntax to set the scene, but then resorts to the
much narrower intuitive interpretation to get the contradiction.

 --------

 Please understand that I am not trying to initiate a tiresome debate about
defining "syntax" and "semantics".  But for just this reason, much of the
discussion of the CR argument is pointless.

 More to the point:

	There can be no final convincing proof that strong AI is
	possible until there is an actual implementation.

	There can be no final convincing proof that strong AI is
	impossible until all terms are precisely defined.  Roughly
	speaking, any 'proof of impossibility' must be a purely
	syntactic proof, and perhaps what the CR argument really
	does is demonstrate the impossibility of producing
	such a 'proof of impossibility'.

 Until there are final convincing proofs, the discussion amounts mostly to
an exchange of opinions in which there is no common agreement as to the
meaning of the terms.

-- 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
  Neil W. Rickert, Computer Science               <rickert@cs.niu.edu>
  Northern Illinois Univ.
  DeKalb, IL 60115                                   +1-815-753-6940


