Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!edcogsci!sharder
From: sharder@cogsci.ed.ac.uk (Soren Harder)
Newsgroups: comp.ai.philosophy
Subject: Re: Mean thoughts on what meaning means
Message-ID: <9409@scott.ed.ac.uk>
Date: 19 May 92 15:07:09 GMT
References: <1992May14.164117.25016@psych.toronto.edu> <1992May14.221449.3721@spss.com> <1992May15.152549.13330@psych.toronto.edu> <1992May16.003049.6758@spss.com>
Organization: Centre for Cognitive Science, Edinburgh, UK
Lines: 86

markrose@spss.com (Mark Rosenfelder) writes:

>In article <1992May15.152549.13330@psych.toronto.edu> michael@psych.toronto.edu 
>(Michael Gemar) writes:

>Amazing thing, transducers.  There's something grounded to the world on
>one side, and meaningless symbols on the other.  Very interesting; 
>apparently a physical device can desemanticize its inputs.  This suggests
>that we could build a resemanticizer to turn symbols back into 
>meaning-bearing stuff...

No, it doesn't. You cannot deduce the possibility of going one way
from the possibility of going the other: you can turn a living cow
into a steak, but you cannot turn a steak back into a living cow.
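
To see why not, notice that a transducer is typically a many-to-one
map, and a many-to-one map has no inverse. A throwaway sketch (mine,
in Python; the thresholds are invented for illustration):

    # A "transducer" that collapses a continuous reading into one of
    # three symbols.  Many inputs map to the same symbol, so no
    # "resemanticizer" could recover the original input.
    def transduce(temperature_c):
        if temperature_c < 10.0:
            return "COLD"
        elif temperature_c < 25.0:
            return "MILD"
        else:
            return "HOT"

    # 9.3, 2.8 and -40.0 all come out as "COLD"; the differences
    # between them are gone for good.
    print(transduce(9.3), transduce(2.8), transduce(-40.0))

Going forward is easy; going back is not even well-defined.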

>Better yet, why don't we move the boundaries of the system outward a bit,
>so the transducers are part of the system?  Does the system *including
>transducers* understand?  Note that we are no longer talking about a
>system of pure symbol manipulation (tho' we haven't added much more).

>Or let's look at it another way: why do you want to consider the output
>of the transducers as merely symbolic?  Because in a digital computer the
>output is in the form of binary data, identical in low-level form to
>program code or data?  Well, what if we invent a new computer that can
>manipulate photographs, sound recordings, etc., directly, *without* so
>translating?  (That is, where a traditional computer is restricted to
>operations on bit patterns, our new computer can operate on photographs etc.)
>Now we can throw out the transducers; and they will no longer be able to
>skim off the world-grounding nature of the inputs.  Does the system
>understand now?

>If you think such a computer isn't possible, I'll give you one: Searle
>sitting in the Chinese room.  If the rulebooks contain pictures which
>Searle can process, it's much harder to maintain that there is no way
>to ground the Room's symbols in reality.

>To put it another way, yes, perhaps you could learn Chinese from a Chinese-
>Chinese dictionary-- if it were illustrated.

If you understand the pictures, yes. Harnad himself takes the
Chinese/Chinese experiment one step further:
'Suppose you had to learn Chinese as a _first_ language and the only
source of information you had was a Chinese/Chinese dictionary! This is
more like the actual task faced by a purely symbolic model of the
mind.' (Harnad (1989): The symbol grounding problem)

>Actually, I don't really believe that transducers remove world-grounding.

Quite the reverse: far from removing grounding, transducers create it!

>After all, our own brains only have access to the world through a similar
>mechanism: sounds, pictures, sensations, all are transmitted to the brain
>through the medium of neuron firings.  If we can be grounded to the world
>with such a system, so can a robot.

Yes, I agree. And so does Harnad. But this holds only if the symbols
are grounded. (Harnad claims you need connectionism for this specific
purpose.)
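
For the record, Harnad's own proposal is a hybrid: elementary symbols
get their meaning from learned (connectionist) categorizations of
sensory input, and composite symbols inherit grounding from the
elementary ones; his example is that 'zebra' can be grounded as
'horse' & 'stripes'. A toy sketch of the idea (mine, not Harnad's
code; the prototype vectors are invented, and a nearest-prototype
lookup stands in for a trained network):

    import math

    # Pretend these prototypes were learned by a connectionist net
    # from sensory projections (features: stripedness, horse-shape).
    PROTOTYPES = {"HORSE": (0.1, 0.9), "STRIPES": (0.9, 0.1)}

    def categorize(sensory_vector):
        # Ground a raw input in an elementary symbol name by
        # picking the nearest prototype.
        return min(PROTOTYPES,
                   key=lambda n: math.dist(sensory_vector, PROTOTYPES[n]))

    # Purely symbolic composition on top of the grounded names:
    # ZEBRA := HORSE & STRIPES inherits grounding without any new
    # sensory contact with zebras.
    def is_zebra(inputs):
        return {categorize(v) for v in inputs} == {"HORSE", "STRIPES"}

    print(is_zebra([(0.15, 0.85), (0.95, 0.05)]))   # True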

>Why not say that SHRDLU is grounded in a virtual world, not the real world?
>Or, if you like, divide the program into two systems, the world simulator
>and the AI algorithm.  If the latter is correctly designed, it shouldn't
>matter if its i/o relates to the real world or to a virtual world.  The
>algorithm portion is grounded in whichever world it's connected to.

I guess that is possible. On the one hand it is not worth much,
because you have no way of showing in what respects the virtual world
is equivalent to ours. On the other hand, it raises a problem: why
wouldn't we then say that e.g. a Pacman program has symbol grounding,
not to our world (only through the programmer), but to the imaginary
'Pacman world'?
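
That division is easy to make concrete. A sketch (mine; no relation
to SHRDLU's or Pacman's actual code): the AI part sees only a
sense/act interface, so it is grounded in whichever world happens to
sit behind that interface.

    class PacmanWorld:
        # An imaginary world: a one-dimensional corridor with a pellet.
        def __init__(self):
            self.agent, self.pellet = 0, 3
        def sense(self):
            return "PELLET_RIGHT" if self.pellet > self.agent else "PELLET_HERE"
        def act(self, move):
            if move == "RIGHT":
                self.agent += 1

    def agent_step(world):
        # The AI algorithm: identical whether 'world' is a simulation
        # or a wrapper around real transducers.
        if world.sense() == "PELLET_RIGHT":
            world.act("RIGHT")

    w = PacmanWorld()
    for _ in range(3):
        agent_step(w)
    print(w.sense())   # PELLET_HERE -- grounded, but only in Pacman's world

Swap PacmanWorld for a class driving real cameras and motors and the
agent code does not change a line; that is the sense in which it is
grounded in whichever world it is connected to.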

>By the way, do you have a citation for Harnad's argument which you summarize
>above?  It sounds like something I'd like to read.

My reference is a paper presented at the CNLS conference on Emergent
Computation in Los Alamos, May 1989 ('Submitted to Physica D.'):
Stevan Harnad (1989): The symbol grounding problem.


Soren Harder

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Soren Harder, (MSc student)
Centre for Cognitive Science, 2 Buccleuch Place, Edinburgh
E-mail: sharder@cogsci.ed.ac.uk
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


