Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!europa.asd.contel.com!darwin.sura.net!udel!rochester!cantaloupe.srv.cs.cmu.edu!crabapple.srv.cs.cmu.edu!andrew.cmu.edu!fb0m+
From: fb0m+@andrew.cmu.edu (Franklin Boyle)
Newsgroups: comp.ai.philosophy
Subject: Re: The Robot Reply
Message-ID: <MdPRIqa00WBNM2Vn4j@andrew.cmu.edu>
Date: 10 Jan 92 17:17:42 GMT
Organization: Cntr for Design of Educational Computing, Carnegie Mellon, Pittsburgh, PA
Lines: 23

Daryl McCullough writes:

>Jeff Dalton writes:
> 
>> If Searle is right that without sensory input there is no
>> understanding in computers by virtue of their running the right
>> program, why would adding sensors cause understanding to appear? Why
>> does it matter that some of the squiggles come from sensors?
> 
>If Searle claims that what is missing in a syntactic simulation of a
>mind is that there is no causal connection between the words being
>manipulated and the real-world objects to which the words refer, then
>the use of sensors changes things significantly. Sensors, together
>with manipulators, produce causal relations between the syntactic
>processing inside the machine and what is going on in the real world:
>changes in the world show up as changes in the internal states of the
>machine, and changes in the machine produce changes in the world
>(through the manipulators).
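
Even granting that loop, it is worth seeing how little it buys.
Here is a minimal sketch in C of the sensor/manipulator coupling
Daryl describes (every name in it is illustrative, made up for this
post, not taken from any real system):

/* Hypothetical sketch of the sensor/manipulator loop described
 * above.  All names are illustrative only. */

#include <stdio.h>

double light_level = 0.3;                 /* the "real world"       */
enum { NONE, DARK, BRIGHT } token = NONE; /* internal syntactic state */

void sense(void)   /* world -> machine: world state becomes a token */
{
    token = (light_level > 0.5) ? BRIGHT : DARK;
}

void act(void)     /* machine -> world, via the "manipulator"       */
{
    if (token == DARK)
        light_level += 0.2;    /* e.g., turn a lamp up a notch      */
}

int main(void)
{
    int i;
    for (i = 0; i < 5; i++) {  /* the closed causal loop            */
        sense();
        act();
    }
    printf("light=%.1f token=%s\n", light_level,
           token == BRIGHT ? "BRIGHT" : "DARK");
    return 0;
}

/* The token now covaries lawfully with the light level, yet nothing
 * in the loop makes it *mean* brightness to the machine. */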

Mere causal correlation does not imply grounding, though: the
machine's tokens now covary lawfully with states of the world, but
nothing in that covariation makes the tokens mean anything *to the
machine*. See Harnad's work on the symbol grounding problem (e.g.,
"The Symbol Grounding Problem", Physica D 42, 1990) and Howard
Pattee's work on semantic closure.

-Frank
