From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sol.ctr.columbia.edu!bronze!chalmers Tue Jun  9 10:05:53 EDT 1992
Article 5999 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sol.ctr.columbia.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Problems with Symbol Grounding
Message-ID: <1992Jun1.060642.29943@bronze.ucs.indiana.edu>
Organization: Indiana University
Date: Mon, 1 Jun 92 06:06:42 GMT
Lines: 68

The whole cluster of issues surrounding "Symbol Grounding" is quite
interesting, but it seems to me that there are a number of problems
with the position Harnad has put forward.

(1) Firstly and most obviously, the very existence of the issue is
parasitic on the success of Searle's Chinese-Room argument.  If
one doesn't accept the Chinese Room argument, then there's no
deficiency in straight computation that needs rectifying.  (On
the other hand, it is nice to see defenders of Searle going
farther than just pointing to the failure of computation, and
trying to offer a positive diagnosis of what's going on.)

(2) When these views are discussed, people tend to conflate various
different things as being criterial for "real semantics".  Harnad
holds that qualia, i.e. the raw feel of subjective experience, are
what's criterial for "real semantics" -- a somewhat unusual position
that tends to go unnoticed in these discussions.  Other people
don't worry about qualia, and are just concerned with a causal
connection to the world.  It's this latter issue, I think, and not
worries about qualia or Chinese Rooms, that's responsible for the
appeal of the "Symbol Grounding" issue to the AI community.

Call the two different issues "Causal Grounding" and "Internal
Grounding".  The object of the former is to achieve real causal
connections between a system and the world, whereas the latter is
concerned with changing the insides of a system from "symbol
crunching" to something else, in order that we get real qualia
(because of Chinese Room problems).  Most AI people are concerned
with the former as a goal in its own right; Harnad's position, by
contrast, seems to be that the former is mostly of interest insofar
as it might lead to the latter.

(3)  It's almost certainly the case that the "Total Turing Test" can
be passed by a straight computational system whose use of transducers
is confined to the periphery of the system.  (Stevan will usually
remark that "TTT-passers don't have to consist of a symbol-crunching
core with transducers added on", but after some argument he'll
usually concede that in principle, they *can* do so, because
everything internal can be computationally simulated.)

Therefore, if one accepts that TTT-passing is sufficient for
consciousness (or whatever), but that straight computation is not,
one is forced to believe in the existence of two systems: A, which
is purely computational, with computational "virtual input", and B,
which is just like A except that it has transducers at the
periphery; such that A is wholly unconscious, but B has a rich mental
life like our own.

It's very difficult to believe that these relatively simple transducers
could make that much difference.  All we need are a number of "shallow"
A-to-D and D-to-A converters at the periphery of a robot body.  Could
these cause consciousness to suddenly spring into existence?

(4) As somebody else remarked, the TTT is far too strong a condition
to be criterial for anything.  The TT was already too strong to
serve as anything but a sufficiency condition (see e.g. French's
article "Subcognition and the Limits of the Turing Test", in Mind,
1990).  So it seems strange and arbitrary to *require* TTT-capacity
for real grounding, whatever that comes to.  Perhaps a more plausible
case might be made for some weaker condition, such as robust
interaction with the real world, but indistinguishability seems to
be asking for far too much (and will probably never be achieved in
practice by any non-human system).

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."