From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:34:08 EST 1992
Article 2559 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: The Robot Reply (was Re: Searle, again)
Message-ID: <5908@skye.ed.ac.uk>
Date: 8 Jan 92 19:35:50 GMT
References: <2127@ucl-cs.uucp> <91338.113617KELLYDK@QUCDN.QueensU.CA> <5796@skye.ed.ac.uk> <YAMAUCHI.91Dec5235651@heron.cs.rochester.edu> <5825@skye.ed.ac.uk> <309@tdatirv.UUCP>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 41

In article <309@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <5825@skye.ed.ac.uk> jeff@aiai.UUCP (Jeff Dalton) writes:
>|Looking at it from the inside, the input from sensors is just
>|more squiggle-squiggles and squoggle-squoggles.  That some of
>|them come from a camera while others are written down by a
>|person -- why does adding some of the former suddenly solve
>|the problem for all of the inputs involved?
>
>Looking at it from the inside the input to the brain from sense organs
>is just more dits and dahs travelling along the axons.  That some of them
>come from a photoreceptor while others are derived from a vibration detector
>activated by another person -- where does semantics come from here?
>
>You see, the *exact* same arguments apply to humans.

Actually no.  If Searle is right that without sensory input there
is no understanding in computers by virtue of their running the
right program, why would adding sensors cause understanding to
appear?  Why does it matter that some of the squiggles come from
sensors?

Note the "if Searle is right about ..." part.  The point is not
"is Searle right about ..." but *if* he's right, what difference
do sensors make?

There is no equivalent supposition that humans have no understanding
without sensors.  Of course, sensors help in learning.  But if a
person were in a Turing Test, the person could ignore everything except
what's coming in on the teletype and still understand what's being
said.  A computer in the same situation is just the case we're
supposing Searle is right about.

Of course, a computer might learn, with the aid of sensors, and
then be put in the TT.  But the same kind of information could
have been in the program from the start.  So -- if Searle is
right about the case with no sensors -- he's right about the
case where the information was in the program from the start;
unless it somehow matters that the sensors are still connected
even though being ignored.

-- jd