From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!natinst.com!news.dell.com!swrinde!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!spssig.spss.com!markrose Mon May 25 14:06:55 EDT 1992
Article 5819 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!natinst.com!news.dell.com!swrinde!zaphod.mps.ohio-state.edu!moe.ksu.ksu.edu!kuhub.cc.ukans.edu!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Mean thoughts on what meaning means
Message-ID: <1992May21.185857.42527@spss.com>
Date: 21 May 92 18:58:57 GMT
References: <1992May15.152549.13330@psych.toronto.edu> <1992May16.003049.6758@spss.com> <9409@scott.ed.ac.uk>
Organization: SPSS Inc.
Lines: 40
Nntp-Posting-Host: spssrs7.spss.com

In article <9409@scott.ed.ac.uk> sharder@cogsci.ed.ac.uk (Soren Harder) writes:
>markrose@spss.com (Mark Rosenfelder) writes:
>>Amazing thing, transducers.  There's something grounded to the world on
>>one side, and meaningless symbols on the other.  Very interesting; 
>>apparently a physical device can desemanticize its inputs.  This suggests
>>that we could build a resemanticizer to turn symbols back into 
>>meaning-bearing stuff...
>
>No it doesn't. You cannot deduce the possibility of going one way from
>the possibility of going the other. E.g. you can turn a living cow
>into a steak, but you cannot turn a steak into a living cow.

I didn't say it *proved* the possibility, only that it suggested it.  You
can go from electrical signals to movements of a membrane, and you can go
the other way, too.  I was arguing against the Searle-style contention that
transduction strips the meaning from its inputs.  If that were true, we'd
need some demonstration that the reverse process couldn't take place.

MR:
>>Better yet, why don't we move the boundaries of the system outward a bit,
>>so the transducers are part of the system?  Does the system *including
>>transducers* understand?  

If I understand him, this is precisely what Harnad is maintaining.  To be
precise, what is intelligent is not a computational system that happens
to have transducers added to it; what is intelligent is a robotic system
that contains transducers and other devices, some of which may be
computational.  Without transducers, connected only to a simulated reality,
the computer is unintelligent; with transducers, connected to a real or
virtual reality, the robot is intelligent.  That answers some of the
recent queries on the net.

SH:
>My reference is a paper presented at the CNLS conference on emergent
>computation in Los Alamos, May 1989 ('Submitted to Physica D.'):
>Stevan Harnad(1989): The symbol grounding problem.

Thanks; however, I now have a stack of Harnad sitting on my desk,
acquired via the ftp access he posted about.
