From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!pipex!ibmpcug!ibmpcug!slxsys!uknet!edcastle!cam Sun May 31 19:04:39 EDT 1992
Article 5961 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!pipex!ibmpcug!ibmpcug!slxsys!uknet!edcastle!cam
From: cam@castle.ed.ac.uk (Chris Malcolm)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <21987@castle.ed.ac.uk>
Date: 28 May 92 21:22:56 GMT
References: <1992May25.214006.29965@Princeton.EDU> <1992May26.022413.14151@mp.cs.niu.edu> <1992May27.183408.4868@spss.com>
Organization: Edinburgh University
Lines: 22

In article <1992May27.183408.4868@spss.com> markrose@spss.com (Mark Rosenfelder) writes:
>In article <1992May26.022413.14151@mp.cs.niu.edu> rickert@mp.cs.niu.edu 

>In Harnad's view, adding or removing transducers *doesn't* change
>anything... for the internal calculating engine, which he doesn't consider
>intelligent.  The robot as a whole-- engine + transducers (perhaps + other
>non-computational elements)-- *is* intelligent; 

I would prefer to say that only the robot-in-its-world is intelligent.
Its computer-brain is not, nor is the entire robot including its
transducers. The transducers are important only because they mediate
its mind-world connection. Mentalistic terminology can only properly
be applied to the robot-in-its-world. Truncating the requirement for
groundedness to the transducers allows one to call a bottled creature,
or a kitten in a basket (which fails to learn the significant joints
in its world because it can't _control_ its perceptions), grounded --
which they aren't; and it forces one to claim that a creature living
in a simulated world _can't_ be grounded, which it can.
-- 
Chris Malcolm    cam@uk.ac.ed.aifh          +44 (0)31 650 3085
Department of Artificial Intelligence,    Edinburgh University
5 Forrest Hill, Edinburgh, EH1 2QL, UK                DoD #205
