From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!mips!mips!decwrl!mcnc!aurs01!throop Tue Jun  9 10:07:55 EDT 1992
Article 6157 of comp.ai.philosophy:
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <60791@aurs01.UUCP>
Date: 8 Jun 92 17:39:20 GMT
References: <1992Jun6.153132.25456@Princeton.EDU> <1992Jun6.163918.24479@news.media.mit.edu> <BILL.92Jun6194350@ca3.nsma.arizona.edu>
Sender: news@aurs01.UUCP
Lines: 55

> bill@nsma.arizona.edu (Bill Skaggs)
> Stevan's line is that i) the Turing Test is not sufficient to prove
> intentionality, but ii) the Total Turing Test *is* sufficient.

This is also my interpretation of Stevan's position (or, one part of
it anyway).  I have two difficulties with this position.

   1) I don't know just what constitutes the TTT.  With the TT, it
      is spelled out: teletypes, all other details hidden.   With the
      TTT, I have no good guide as to exactly what is irrelevant,
      and what is relevant.  The guide of "distinguishable by the
      'normal' or 'average' or 'to be expected' (or whatever)
      human senses" is simply not anywhere near enough of a guide.

   2) I don't see any difference in principle between the TT and
      the TTT.  They merely differ in what details of implementation
      of the testee are allowed to be hidden from the judges.  This
      presumably makes the TTT a "harder" test to pass, but I see
      no justification for the extra difficulty being relevant, any
      more than the extra difficulty of, say, requiring that the
      teletypes in the TT should be capable of growing hair
      indistinguishable from human hair.  That's a harder task
      all right, but I see no indication of its relevance.

Bill goes on to address the second point:

> "Intentionality" is the relationship between thoughts and the things
> in the world that they are about.  To make his claim plausible, Stevan
> must show that there is some aspect of this relationship that
> necessarily follows from the TTT but not from the TT.  Clearly there
> *is* such an aspect: Stevan calls it "transduction", but even if we
> don't like the word, we still have to admit that a system with robotic
> capabilities relates to the objects of *our* intentionality in a
> different way than a system without such capabilities: it can sense
> and manipulate those objects, rather than merely talking about them.

Well, "everyone talks about the weather, but nobody does anything
about it".  Does that mean that human intentionality about the
weather somehow differs from our intentionality about our clothing
or other objects we CAN manipulate?  This seems very odd to me.

Further, I ask again, what is the difference IN PRINCIPLE between
a computer being able to turn a pixel on and off, and thus manipulate
the world by the pattern of light shed upon it, and a robot being able
to turn a servomotor on and off, and thus manipulate the world by
the pressure of a robotic finger "shed upon it"?

In other words, it has always seemed to me that computers CAN 
"manipulate those objects, rather than merely talking about them",
and I don't understand why those who disagree with this position do so.

The difference between a computer and a robot is merely which effectors
and sensors are considered part of the entity.
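To put that last point concretely, here is a toy sketch (all the names are mine,
invented for illustration, not anything from the discussion): the same control
program drives a "pixel" and a "servomotor" through an identical write-to-effector
interface, so nothing about the program itself distinguishes the computer from
the robot.

```python
class PixelEffector:
    """Manipulates the world by the pattern of light shed upon it."""
    def __init__(self):
        self.state = 0

    def set(self, value):
        # Stands in for lighting or darkening a pixel.
        self.state = value


class ServoEffector:
    """Manipulates the world by the pressure of a robotic finger."""
    def __init__(self):
        self.state = 0

    def set(self, value):
        # Stands in for energizing or releasing a servomotor.
        self.state = value


def drive(effector, pattern):
    """One control program, indifferent to which effector is attached."""
    for value in pattern:
        effector.set(value)
    return effector.state
```

On this sketch, whether the entity "is a robot" is decided entirely by which
effector object happens to be wired in, exactly as the paragraph above claims.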

Wayne Throop       ...!mcnc!aurgate!throop
