From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!linac!att!princeton!phoenix.Princeton.EDU!harnad Wed Sep 23 16:54:07 EDT 1992
Article 6948 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uwm.edu!linac!att!princeton!phoenix.Princeton.EDU!harnad
From: harnad@phoenix.Princeton.EDU (Stevan Harnad)
Subject: Re: Grounding
Message-ID: <1992Sep17.001119.19562@Princeton.EDU>
Summary: It's Symbols you need to ground, and not just in more symbols
Originator: news@nimaster
Keywords: Symbol Grounding Problem
Sender: news@Princeton.EDU (USENET News System)
Nntp-Posting-Host: phoenix.princeton.edu
Organization: Princeton University
References: <BuDr7y.1LA@usenet.ucs.indiana.edu> <20390@plains.NoDak.edu> <1992Sep16.203451.5162@spss.com>
Date: Thu, 17 Sep 1992 00:11:19 GMT
Lines: 136

In article <1992Sep16.203451.5162@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <20390@plains.NoDak.edu> vender@plains.NoDak.edu (Does it matter?) writes:
>>  In an earlier thread, it was said that a computer based AI could
>>  not be conscious because its inputs lacked grounding in the real
>>  world.  The question is, what if we grounded it in a computer
>>  system (say a UNIX system on the Internet).  Granted it may
>>  be an incomprehensible intelligence, but would it qualify
>>  as having its inputs solidly grounded in its environment
>>  (and thus avoid that argument)?
>
>What folks who talk about "grounding in the real world" mean, I believe, is
>that concepts acquire their meaning by virtue of an immense experience
>of direct physical interaction with the real world.  This would not be
>the case for an AI (merely) running under Unix and/or connected to the
>Internet, so no, such a system wouldn't be grounded.

Here is the Symbol Grounding Problem. It is NOT solved by "grounding"
one interpretable symbol system in another interpretable symbol system.
The grounding needs to be autonomous, not parasitic on yet another
interpretation:

   A symbol system is a set of physical tokens (e.g., scratches on paper,
   holes on a tape, flip-flop states in a computer) and rules for
   manipulating them (e.g., erase "0" and write "1"). The rules are purely
   syntactic: They operate only on the (arbitrary) shapes of the symbols,
   not their meanings. The symbols and symbol combinations can be given a
   systematic semantic interpretation: for example, they can be
   interpreted as meaning objects ("cat," "mat") or states of affairs
   ("the cat is on the mat"). The symbol grounding problem arises from
   the fact that the meanings of the symbols are not grounded in the
   symbol system itself. They derive from the mind of the interpreter.
   Hence, on pain of infinite regress, the mind cannot itself be just a
   symbol system, syntactically manipulating symbols purely on the basis
   of their shapes. The problem is analogous to attempting to derive
   meaning from a Chinese/Chinese dictionary if one does not first know
   Chinese. One just goes around in endless circles, from meaningless
   symbol to meaningless symbol. The fact that such a trip through the
   dictionary is systematically interpretable (to a Chinese-speaker) is of
   no help, because the interpretation is not grounded in the dictionary
   itself. Hence "Strong AI," the hypothesis that cognition is symbol
   manipulation, is incorrect, as Searle has argued.
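
(A toy illustration, not from the excerpt above: the following Python
sketch is meant only to make "purely syntactic" concrete. The rewrite
rule operates on token shapes alone, and the little "Chinese/Chinese
dictionary" defines every symbol only in terms of other symbols, so
chasing definitions just cycles among meaningless tokens. The tokens
themselves are invented for the example.)

# A toy "pure symbol system": physical tokens plus shape-based rules.
# Any meaning a reader attaches to the tokens is external to the system.

def rewrite(tape):
    # Purely syntactic rule: erase "0" and write "1", by shape alone.
    return ["1" if tok == "0" else tok for tok in tape]

# A "Chinese/Chinese dictionary": each (invented) symbol is defined
# only by pointing at other symbols in the same system.
dictionary = {
    "mao":    ["dongwu"],
    "dongwu": ["huo"],
    "huo":    ["mao"],
}

def look_up(symbol, steps=6):
    # Chasing definitions never bottoms out in anything but more symbols.
    chain = [symbol]
    for _ in range(steps):
        symbol = dictionary.get(symbol, [symbol])[0]
        chain.append(symbol)
    return chain

print(rewrite(["0", "1", "0"]))  # ['1', '1', '1']
print(look_up("mao"))            # circles: mao, dongwu, huo, mao, ...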

   How can one ground the meanings of symbols within the symbol system
   itself? This is impossible in a pure symbol system, but in a hybrid
   system, one based bottom-up on nonsymbolic functions such as
   transduction, analog transformations and sensory invariance extraction,
   the meanings of elementary symbols can be grounded in the system's
   capacity to discriminate, categorize and name the objects and states of
   affairs that its symbols refer to, based on the projections of those
   objects and states of affairs on its sensory surfaces. The grounded
   elementary symbols -- the names of the ground-level sensory object
   categories -- can then be rulefully combined and recombined into
   higher-order symbols and symbol strings. But unlike in a pure symbol
   system, these symbol manipulations would not be purely syntactic ones,
   constrained only by the arbitrary shapes of the symbol tokens; they
   would also be constrained by (indeed, grounded in) the nonarbitrary
   shapes of the distal objects, their proximal sensory projections, the
   analogues of the sensory projections that subserve discrimination, and
   the learned and innate sensory invariants that subserve categorization
   and naming.
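
(A minimal sketch of that hybrid picture, under invented assumptions:
the "sensory projection" is just a list of numbers, the analog
transformation is a normalization, the invariant is a crude feature
threshold, and "cat"/"mat" are the names it grounds. None of the
function names come from the papers listed below; they are placeholders
for the transduction, discrimination, categorization and naming stages.)

# Hybrid grounding pipeline: transduction -> analog transform ->
# invariant extraction -> category name.  All data here is invented.

def transduce(distal_object):
    # Stand-in for a sensory surface: the object's proximal projection.
    return [float(x) for x in distal_object]

def analog_transform(projection):
    # Nonsymbolic, analog processing that subserves discrimination.
    peak = max(projection) or 1.0
    return [x / peak for x in projection]

def extract_invariant(icon):
    # A crude learned/innate invariant that subserves categorization:
    # the proportion of "active" regions in the analog icon.
    return sum(1 for x in icon if x > 0.5) / len(icon)

def name_category(invariant):
    # Elementary symbols are the names of grounded sensory categories.
    return "cat" if invariant > 0.6 else "mat"

def ground(distal_object):
    return name_category(
        extract_invariant(analog_transform(transduce(distal_object))))

# Higher-order symbol strings are composed from grounded names, so their
# manipulation is constrained by more than the arbitrary token shapes.
subject = ground([9, 8, 9, 7, 1])  # mostly "active" projection -> "cat"
place   = ground([1, 0, 2, 1, 9])  # mostly "inactive" projection -> "mat"
print("the %s is on the %s" % (subject, place))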

REFERENCES (all available by anonymous ftp from host princeton.edu
directory pub/harnad):

Harnad, S. (1987) The induction and representation of categories.
In: Harnad, S. (ed.) (1987) Categorical Perception: The Groundwork of
Cognition. New York: Cambridge University Press.
Filename: harnad87.categorization

Harnad, S. (1989) Minds, Machines and Searle. Journal of Theoretical
and Experimental Artificial Intelligence 1: 5-25.
Filename: harnad89.searle

Harnad, S. (1990) The Symbol Grounding Problem.
Physica D 42: 335-346.
Filename: harnad90.sgproblem

Harnad, S. (1990) Against Computational Hermeneutics. (Invited
commentary on Eric Dietrich's Computationalism)
Social Epistemology 4: 167-172.
Filename: harnad90.dietrich.crit

Harnad, S. (1990) Lost in the hermeneutic hall of mirrors. Invited
Commentary on: Michael Dyer: Minds, Machines, Searle and Harnad.
Journal of Experimental and Theoretical Artificial Intelligence
2: 321-327.
Filename: harnad90.dyer.crit

Harnad, S. (1991) Other bodies, Other minds: A machine incarnation
of an old philosophical problem. Minds and Machines 1: 43-54.
Filename: harnad91.otherminds

Harnad, S., Hanson, S.J. & Lubin, J. (1991) Categorical Perception and
the Evolution of Supervised Learning in Neural Nets. In:  Working
Papers of the AAAI Spring Symposium on Machine Learning of Natural
Language and Ontology (DW Powers & L Reeker, Eds.) pp. 65-74. Presented
at Symposium on Symbol Grounding: Problems and Practice, Stanford
University, March 1991; also reprinted as Document D91-09, Deutsches
Forschungszentrum fuer Kuenstliche Intelligenz GmbH, Kaiserslautern, FRG.
Filename: harnad91.cpnets

Harnad, S. (1992) Connecting Object to Symbol in Modeling
Cognition. In: A. Clarke and R. Lutz (Eds.) Connectionism in Context.
Springer Verlag.
Filename: harnad92.symbol.object

Hayes, P., Harnad, S., Perlis, D. & Block, N. (1992) Virtual Symposium
on the Virtual Mind. Minds and Machines (in press)
Filename: harnad92.virtualmind

Harnad, S. (1992) The Turing Test Is Not A Trick: Turing
Indistinguishability Is A Scientific Criterion. SIGART Bulletin 3(4)
(October) pp. 9-10.
Filename: harnad92.turing

Harnad, S. (1993) Grounding Symbols in the Analog World with Neural
Nets. Think (Special Issue on Machine Learning) (in press)
Filename: harnad92.symb.anal.net

Harnad, S. (1993) Artificial Life: Synthetic Versus Virtual.
Artificial Life III (Santa Fe, June 1992) (to appear)
Filename: harnad92.artlife



The solution to the symbol grounding problem, then, will come from an
understanding of the mechanisms capable of accomplishing sensory
categorization and the learning of concrete and abstract categories.
Among the candidates are sensory icons and neural nets that learn
sensory invariants.
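
(As a hedged illustration of the second candidate, here is a one-unit
perceptron in plain Python, with invented data and invented category
names, that learns a sensory invariant separating two classes of
projections under supervision and then serves as the ground for naming
them.)

# Toy supervised learning of a sensory invariant.  The "projections"
# are two invented features per stimulus; labels 1/0 stand for two
# sensory categories that will be named "cat" and "mat".
examples = [((0.9, 0.8), 1), ((0.8, 0.7), 1), ((0.7, 0.9), 1),
            ((0.1, 0.2), 0), ((0.2, 0.1), 0), ((0.3, 0.2), 0)]

w, b, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(50):  # perceptron learning of the category boundary
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - out
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

def name(projection):
    # The learned invariant grounds the elementary symbol it names.
    x1, x2 = projection
    return "cat" if w[0] * x1 + w[1] * x2 + b > 0 else "mat"

print(name((0.85, 0.75)))  # -> cat
print(name((0.15, 0.25)))  # -> mat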

-- 
Stevan Harnad  Department of Psychology  Princeton University 
& Lab Cognition et Mouvement URA CNRS 1166 Universite d'Aix Marseille II
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
harnad@learning.siemens.com / harnad@gandalf.rutgers.edu / (609)-921-7771


