From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!caen!spool.mu.edu!sgiblab!a2i!pagesat!spssig.spss.com!markrose Thu Oct  8 10:10:34 EDT 1992
Article 7054 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!caen!spool.mu.edu!sgiblab!a2i!pagesat!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Subject: Re: Grounding
Message-ID: <1992Sep28.165125.6660@spss.com>
Sender: news@spss.com (Net News Admin)
Organization: SPSS Inc.
References: <1992Sep25.160149.26882@spss.com> <717645108@sheol.UUCP>
Date: Mon, 28 Sep 1992 16:51:25 GMT
Lines: 24

In article <717645108@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>As near as I can tell, "symbols" are purely in the mind of the beholder,
>and so to say that "humans don't interact with the rest of the world via
>symbols" and "computers do interact with rest of the world via symbols"
>are essentially meaningless.  There is no objective fact of the matter
>for either of these claims. 
>
>Similarly, the claim that computers "don't transduce" real-world stimuli
>and humans do seems very ill thought out.  Both transduce real-world
>stimuli.  There is no extremely strong difference I can see between a
>photoreceptor in the human eye firing or a key contact in a keyboard
>closing, or between a pixel turning on and a muscle fiber contracting. 

There's something of the same difference between two rocks banging together
and the Berlin Philharmonic.

For humans, the meaning of thousands of words is grounded directly in
rich sensory and motor experience.  The same could conceivably be true of
a robot.  But what is experienced directly by a computer attached to a keyboard?
One would be hard pressed to say that it even knows what clicking the keys
feels like, since its experience has none of the information density of
human touch.

How is it that words mean anything, in your view?
