From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!cs.utexas.edu!uunet!psinntp!scylla!daryl Thu Jan 16 17:19:17 EST 1992
Article 2601 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!cs.utexas.edu!uunet!psinntp!scylla!daryl
From: daryl@oracorp.com
Subject: Re: The Robot Reply
Message-ID: <1992Jan9.181611.834@oracorp.com>
Organization: ORA Corporation
Date: Thu, 9 Jan 1992 18:16:11 GMT

Jeff Dalton writes:

> If Searle is right that without sensory input there is no
> understanding in computers by virtue of their running the right
> program, why would adding sensors cause understanding to appear? Why
> does it matter that some of the squiggles come from sensors?

If Searle claims that what is missing in a syntactic simulation of a
mind is that there is no causal connection between the words being
manipulated and the real-world objects to which the words refer, then
the use of sensors changes things significantly. Sensors, together
with manipulators, produce causal relations between the syntactic
processing inside the machine and what is going on in the real world:
changes in the world show up as changes in the internal states of the
machine, and changes in the machine produce changes in the world
(through the manipulators).


> Note the "if Searle is right about ..." part. The point is not
> "is Searle right about ..." but *if* he's right, what difference
> do sensors make?

I think the point of the Robot Reply, the Systems Reply, etc. is that
Searle is *not* right; that there is no essential difference between
the understanding of a human and the understanding of a computer (or
at least Searle hasn't proved that there is an essential difference).
(The lack of sensors is obviously not an essential difference, since
it is easy enough to add them.)

> There is no equivalent supposition that humans have no understanding
> without sensors.  Of course, sensors help in learning.  But if a
> person was in a Turing Test, the person can ignore everything except
> what's coming in on the teletype and still understand what's being
> said.  A computer in the same situation is just the case we're
> supposing Searle is right about.

If you claim that understanding means having a causal connection
between the words we use and the real-world objects they represent,
then disconnecting a human from the real world would similarly
eliminate *human* understanding. I can have a conversation about
apples, or hamburgers, or whatever, but if the only input device for
the human is a terminal, then he has no way of knowing that my words
have any connection with the real world. I could be lying about eating
a hamburger, or whatever.

Daryl McCullough
ORA Corp.
301A Harris B. Dates Dr.
Ithaca, NY 14850-1313
