From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!cs.utexas.edu!wupost!darwin.sura.net!cs.ucf.edu!barros!gomez Mon Jun 15 16:05:07 EDT 1992
Article 6253 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!cs.utexas.edu!wupost!darwin.sura.net!cs.ucf.edu!barros!gomez
From: gomez@barros.cs.ucf.edu (Fernando Gomez)
Subject: Re: Transducers
Message-ID: <1992Jun15.062501.21144@cs.ucf.edu>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida, Orlando
References: <1992Jun8.221324.535@mp.cs.niu.edu> <1992Jun9.051649.9894@cs.ucf.edu> <6957@pkmab.se>
Date: Mon, 15 Jun 1992 06:25:01 GMT
Lines: 46

In article <6957@pkmab.se> ske@pkmab.se (Kristoffer Eriksson) writes:
>In article <1992Jun9.051649.9894@cs.ucf.edu> gomez@barros.cs.ucf.edu (Fernando Gomez) writes:
>> But, what if we replace in the question "jello" with "ice cream" would
>>the disembodied robot know what to answer? The answer is "yes" only if the
>>eating-robot has eaten ice cream and learned this knowledge.
>
>Suppose the disembodied robot answers "I am unable to determine that". Would
>that alone lead you to conclude that the robot is not intelligent or not
>conscious?
>
>For my part, it would only lead to the conclusion that the robot perhaps is
>disembodied (or doesn't have ice cream at home, or something else)...
>
>-- 
>Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
>Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
>Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske

The point I was making in my reply to Skaggs and Rickert is a logical one.
An embodied robot will be unable to pass its problem-solving methods,
insofar as they are embodied, to a disembodied robot. And even if it
could, the other robot would be unable to use them unless it too became
an embodied robot. Hence, there will be new situations which the
embodied robot can solve and the disembodied one will be unable to
solve. Problem-solving methods based on embodied functions cannot be
passed to a disembodied robot.
Do I mean by this that intelligence needs to be embodied to be "real
intelligence"?  Absolutely not. In fact, as I have argued in previous
postings, most of intelligence is of a "disembodied nature," far removed
from the periphery (stimuli, etc.) of our cognitive map and relying on
a strong set of a priori, ungrounded (non-empirical) concepts.
The idea that, by attaching analog devices and sensors to a computer,
"real intelligence" is going to emerge seems to me as ungrounded
as the idea that intelligence will emerge as the end result of
accumulating a tremendous mass of encyclopedic knowledge in a computer.

By the way, this discussion about grounding would become more concrete
if its proponents told us the role (if any) that their ideas play, say,
in solving the missionaries-and-cannibals problem, a classic, almost
forgotten problem in AI which still remains unsolved. No: GPS, Soar,
Prolog, etc. have not solved this problem. Rather, some people, after a
lot of work and ingenuity (including the prior solution of the problem
itself), have "programmed in" a solution using those methods.
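For concreteness, here is what such a "programmed in" solution looks
like: a minimal Python sketch of a hand-coded breadth-first search over
puzzle states. The state encoding, move generator, and safety test are
my own choices for illustration, not anything taken from GPS or Soar.
Note that the program finds the move sequence only because a human has
already formalized the states, operators, and constraints in advance,
which is precisely the prior ingenuity being pointed out.

```python
from collections import deque

def solve_missionaries_and_cannibals(n=3, capacity=2):
    """Breadth-first search over states (missionaries, cannibals, boat)
    counted on the starting bank. Returns a list of boat loads
    (dm, dc) per crossing, or None if no solution exists."""
    def safe(m, c):
        # On neither bank may missionaries be outnumbered by cannibals
        # (a bank with zero missionaries is always safe).
        return (m == 0 or m >= c) and (n - m == 0 or n - m >= n - c)

    start, goal = (n, n, 1), (0, 0, 0)
    # All boat loads: at least one person, at most `capacity`.
    loads = [(dm, dc) for dm in range(capacity + 1)
             for dc in range(capacity + 1 - dm) if dm + dc >= 1]
    parent = {start: None}          # state -> (previous state, load)
    queue = deque([start])
    while queue:
        m, c, b = state = queue.popleft()
        if state == goal:
            path = []               # walk back to start, collecting loads
            while parent[state] is not None:
                state, load = parent[state]
                path.append(load)
            return path[::-1]
        sign = -1 if b == 1 else 1  # boat leaves or returns to start bank
        for dm, dc in loads:
            nm, nc = m + sign * dm, c + sign * dc
            if 0 <= nm <= n and 0 <= nc <= n and safe(nm, nc):
                nxt = (nm, nc, 1 - b)
                if nxt not in parent:
                    parent[nxt] = (state, (dm, dc))
                    queue.append(nxt)
    return None
```

For the classic 3-and-3 puzzle with a two-person boat this search finds
the well-known minimal solution of 11 crossings. The point stands: the
generality lives in the human formalization, not in the search loop.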
