From newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!haven.umd.edu!darwin.sura.net!spool.mu.edu!yale.edu!ira.uka.de!chx400!aragorn.unibe.ch!optolab!larkum Wed Sep 23 16:54:09 EDT 1992
Article 6952 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!haven.umd.edu!darwin.sura.net!spool.mu.edu!yale.edu!ira.uka.de!chx400!aragorn.unibe.ch!optolab!larkum
From: larkum@iam.unibe.ch (Matthew Larkum)
Subject: Re: Grounding
Message-ID: <1992Sep17.095955.10475@aragorn.unibe.ch>
Sender: news@aragorn.unibe.ch
Reply-To: larkum@iam.unibe.ch
Organization: Physiological Institute, University of Berne, Switzerland
References: <1992Sep17.001119.19562@Princeton.EDU>
Date: Thu, 17 Sep 1992 09:59:55 GMT
Lines: 88

In article 19562@Princeton.EDU, harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
>
>Here is the Symbol Grounding Problem. It is NOT solved by "grounding"
>one interpretable symbol system in another interpretable symbol system.
>The grounding needs to be autonomous, not parasitic on yet another
>interpretation:
>
>   A symbol system is a set of physical tokens (e.g., scratches on paper,
>   holes on a tape, flip-flop states in a computer) and rules for
>   manipulating them (e.g., erase "0" and write "1"). The rules are purely
>   syntactic: They operate only on the (arbitrary) shapes of the symbols,
>   not their meanings. The symbols and symbol combinations can be given a
>   systematic semantic interpretation, for example, they can be
>   interpreted as meaning objects ("cat," "mat") or states of affairs
>   ("the cat is on the mat"). The symbol grounding problem arises from
>   the fact that the meanings of the symbols are not grounded in the
>   symbol system itself. They derive from the mind of the interpreter.
>   Hence, on pain of infinite regress, the mind cannot itself be just a
>   symbol system, syntactically manipulating symbols purely on the basis
>   of their shapes. The problem is analogous to attempting to derive
>   meaning from a Chinese/Chinese dictionary if one does not first know
>   Chinese. One just goes around in endless circles, from meaningless
>   symbol to meaningless symbol. The fact that such a trip through the
>   dictionary is systematically interpretable (to a Chinese-speaker) is of
>   no help, because the interpretation is not grounded in the dictionary
>   itself. Hence "Strong AI," the hypothesis that cognition is symbol
>   manipulation, is incorrect, as Searle has argued.
>
>   How can one ground the meanings of symbols within the symbol system
>   itself? This is impossible in a pure symbol system, but in a hybrid
>   system, one based bottom-up on nonsymbolic functions such as
>   transduction, analog transformations and sensory invariance extraction,
>   the meanings of elementary symbols can be grounded in the system's
>   capacity to discriminate, categorize and name the objects and states of
>   affairs that its symbols refer to, based on the projections of those
>   objects and states of affairs on its sensory surfaces. The grounded
>   elementary symbols -- the names of the ground-level sensory object
>   categories -- can then be rulefully combined and recombined into
>   higher-order symbols and symbol strings. But unlike in a pure symbol
>   system, these symbol manipulations would not be purely syntactic ones,
>   constrained only by the arbitrary shapes of the symbol tokens; they
>   would also be constrained by (indeed, grounded in) the nonarbitrary
>   shapes of the distal objects, their proximal sensory projections, the
>   analogues of the sensory projections that subserve discrimination, and
>   the learned and innate sensory invariants that subserve categorization
>   and naming.
>
>The solution to the symbol grounding problem, then, will come from an
>understanding of the mechanisms capable of accomplishing sensory
>categorization and the learning of concrete and abstract categories.
>Among the candidates are sensory icons and neural nets that learn
>sensory invariants.
>
>-- 
>Stevan Harnad  Department of Psychology  Princeton University 
>& Lab Cognition et Mouvement URA CNRS 1166 Universite d'Aix Marseille II
>harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
>harnad@learning.siemens.com / harnad@gandalf.rutgers.edu / (609)-921-7771
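[To fix ideas, here is a minimal, hypothetical sketch of what "purely
syntactic" manipulation in the above sense amounts to: rewrite rules that
inspect only the shapes of the tokens (e.g., erase "0" and write "1"),
never their interpretations. The particular rule set and tape are my own
invented example, not anything from the post above.]

```python
# A toy "pure symbol system": a tape of tokens plus rewrite rules.
# The rules operate only on token shapes; any semantic interpretation
# ("0" means cat, "1" means mat, ...) is supplied by us, not the system.

RULES = [("0", "1")]  # e.g., the rule: erase "0" and write "1"

def apply_rules_once(tape, rules=RULES):
    """Apply the first matching rewrite rule to the leftmost match."""
    for pattern, replacement in rules:
        if pattern in tape:
            return tape.replace(pattern, replacement, 1)
    return tape  # no rule applies; the tape is left unchanged

# Repeated application transforms "001" -> "101" -> "111" without the
# system ever "knowing" what any token stands for.
```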

I agree wholeheartedly with the conclusion that the solution to the grounding
problem lies in understanding how a system deals with sensory input.
However, it doesn't seem obvious to me that this excludes the strong AI
hypothesis.  What is to prevent a pure symbol system from including as
symbols all the possible combinations of sensory input?  Given any animal
which you consider to have cognition, is it clear, as yet, whether or not
its sensory input can be classified into a given set of symbols?  My gut
feeling is that sensory input could be interpreted as a set of symbols,
given some arbitrary granularity imposed on the sensory device.
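
[What I mean by "arbitrary granularity" can be sketched concretely: bin a
continuous sensory reading into one of a finite alphabet of symbol tokens,
which a pure symbol system could then manipulate syntactically. The alphabet,
range, and bin count below are illustrative assumptions of mine, not a claim
about how any actual sensory system works.]

```python
# Hypothetical sketch: impose an arbitrary granularity on a continuous
# sensory reading, yielding a discrete symbol token.

SYMBOLS = ["S0", "S1", "S2", "S3"]  # an assumed finite symbol alphabet

def sense_to_symbol(reading, lo=0.0, hi=1.0, symbols=SYMBOLS):
    """Quantize a reading in [lo, hi] into one of len(symbols) tokens."""
    reading = max(lo, min(hi, reading))          # clamp to the sensor range
    n = len(symbols)
    index = min(int((reading - lo) / (hi - lo) * n), n - 1)
    return symbols[index]

# A stream of analog readings becomes a string of symbol tokens, which
# purely shape-based rules could then operate on.
readings = [0.05, 0.40, 0.73, 0.99]
tokens = [sense_to_symbol(r) for r in readings]
```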

It seems to me that the analogy of a "Chinese/Chinese dictionary" is a
straw-man argument.  It is certainly true that if one were to go into
an empty room with such a dictionary there would be little chance of
learning Chinese.  However, why should we restrict pure symbol systems to
"going into an empty room," metaphorically, when we can give them the
"rule system" of the external world in addition to their own internal
symbol-manipulation rules?  I believe it is possible to learn Chinese using
a Chinese/Chinese dictionary.  In a sense, a billion or so Chinese speakers
have demonstrated this to be the case.  Of course, no one knows for sure
whether the processing they use is pure symbol manipulation, but it hasn't
been ruled out by the above argument either.
_______________________________________________________________________
Matthew Larkum                         | Science:                       
Physiologisches Institut               | "One small step for man,       
Buehlplatz 5, CH-3012 Bern Switzerland |  another small step for man... 
Ph. 41 31 658726 Fax. 41 31 654611     |  I wish I were an astronaut!"  
Internet: larkum@optolab.unibe.ch      |                                
      matthewl@cortex.physiol.su.OZ.AU |  
