From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!uwm.edu!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose Tue Jun  9 10:08:02 EDT 1992
Article 6166 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!uwm.edu!caen!kuhub.cc.ukans.edu!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <1992Jun08.225734.32166@spss.com>
Date: 8 Jun 92 22:57:34 GMT
References: <1992Jun6.163918.24479@news.media.mit.edu> <BILL.92Jun6194350@ca3.nsma.arizona.edu> <60791@aurs01.UUCP>
Organization: SPSS Inc.
Lines: 56
Nntp-Posting-Host: spssrs7.spss.com

In article <60791@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
(quoting Bill Skaggs):
>> "Intentionality" is the relationship between thoughts and the things
>> in the world that they are about.  To make his claim plausible, Stevan
>> must show that there is some aspect of this relationship that
>> necessarily follows from the TTT but not from the TT.  Clearly there
>> *is* such an aspect: Stevan calls it "transduction", but even if we
>> don't like the word, we still have to admit that a system with robotic
>> capabilities relates to the objects of *our* intentionality in a
>> different way than a system without such capabilities: it can sense
>> and manipulate those objects, rather than merely talking about them.
>
>Well, "everyone talks about the weather, but nobody does anything
>about it".  Does that mean that human intentionality about the
>weather somehow differs from our intentionality about our clothing
>or other objects we CAN manipulate?  This seems very odd to me.

I doubt it really seems odd to you. :)  Surely, as a general principle,
you'd agree that your knowledge of something is lessened if you have no
direct or manipulative experience of it.  For instance, you have no
direct experience with hypercubes.  You can know something about them,
and your statements about them can have meaning, but you don't know as
much or mean as much as a 4-dimensional creature that could directly
manipulate them.
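(To make the point concrete: some of what we can know about hypercubes
without any direct experience is purely symbolic.  A small sketch, using
the standard face-counting formula for the n-cube -- the function name
is mine, not anything from the discussion:)

```python
# An n-dimensional cube has C(n, k) * 2**(n - k) faces of dimension k.
# We can derive these facts about a 4-cube symbolically, without ever
# having seen or handled one.
from math import comb

def faces(n, k):
    """Number of k-dimensional faces of an n-dimensional cube."""
    return comb(n, k) * 2 ** (n - k)

# For the 4-dimensional hypercube: vertices, edges, squares, cubic cells.
print([faces(4, k) for k in range(4)])  # -> [16, 32, 24, 8]
```

That a 4-cube has 16 vertices and 8 cubic cells is knowledge we hold
only through symbols -- which is exactly the contrast with a creature
that could pick one up.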

>Further, I ask again, what is the difference IN PRINCIPLE between
>a computer being able to turn a pixel on and off, and thus manipulate
>the world by the pattern of light shed upon it, and a robot being able
>to turn a servomotor on and off, and thus manipulate the world by
>the pressure of a robotic finger "shed upon it"?
>
>In other words, it has always seemed to me that computers CAN 
>"manipulate those objects, rather than merely talking about them",
>and I don't understand why those who disagree with this position do so.
>
>The difference between a computer and a robot is merely which effectors
>and sensors are considered part of the entity.

I think this is a bit disingenuous.  Do you really think that the computer
you are reading this text on is just a handicapped robot?

Programming an intelligent robot would be a very different task from 
programming a computer to pass a (teletype) Turing Test.  Much of the robot's 
program (as we could predict from our knowledge of the brain) would be 
dealing with interpreting sensory input, driving motor output, and 
controlling the robot's internal physical functionality.  The purely 
linguistic portion of the robot's program might be small by comparison,
and its design might be intimately affected by the interfaces to the
sensorimotor capacity.
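(A minimal sketch of the division of labor described above -- all class
and function names here are hypothetical, invented for illustration,
not anyone's actual robot design.  The point is only the proportion:
the sensorimotor machinery dominates, and the small linguistic module's
vocabulary bottoms out in the percepts that machinery delivers:)

```python
class SensoriMotorSystem:
    """Stands in for the bulk of the program: perception and motor control.
    In a real robot this would be vision, touch, joint control, etc."""
    def sense(self):
        # Placeholder percept; a real system would return rich sensor data.
        return {"object_seen": "cup", "distance_m": 0.4}

    def act(self, target):
        # Placeholder for servo commands and trajectory planning.
        return "moving toward " + target

class LinguisticModule:
    """The comparatively small 'talking' part, whose interface is couched
    entirely in the sensorimotor system's categories."""
    def describe(self, percept):
        return "I see a %s about %.1f m away." % (
            percept["object_seen"], percept["distance_m"])

def control_cycle(body, talker):
    """One pass of the loop: sense, describe, act."""
    percept = body.sense()
    utterance = talker.describe(percept)
    action = body.act(percept["object_seen"])
    return utterance, action

utterance, action = control_cycle(SensoriMotorSystem(), LinguisticModule())
print(utterance)
print(action)
```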

Harnad may or may not be right about transducers being necessary for symbol
grounding.  But surely his insistence on the importance of robotic
interaction with the world is only common sense.  The burden of proof,
it seems to me, is with those who identify "intelligence" only with the
small portion of the robotic program-- or the human brain-- which is
*not* directly concerned with such interactions.


