From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!usc!zaphod.mps.ohio-state.edu!sample.eng.ohio-state.edu!purdue!news.cs.indiana.edu!arizona.edu!penny.telcom.arizona.edu!bill Tue Jun  9 10:07:38 EDT 1992
Article 6133 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!usc!zaphod.mps.ohio-state.edu!sample.eng.ohio-state.edu!purdue!news.cs.indiana.edu!arizona.edu!penny.telcom.arizona.edu!bill
From: bill@nsma.arizona.edu (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <BILL.92Jun6194350@ca3.nsma.arizona.edu>
Date: 7 Jun 92 00:43:50 GMT
References: <1992Jun6.153132.25456@Princeton.EDU> <1992Jun6.163918.24479@news.media.mit.edu>
Distribution: world,local
Organization: ARL Division of Neural Systems, Memory and Aging, University of Arizona
Lines: 46

In article <1992Jun6.163918.24479@news.media.mit.edu> 
minsky@media.mit.edu (Marvin Minsky) writes:

   In article <1992Jun6.153132.25456@Princeton.EDU> 
   harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
   >I'm beginning to suspect, based on the difficulty people seem to have
   >even in UNDERSTANDING it (let alone accepting it) that my point about
   >my being a transducer might be a subtler and more profound one than I
   >had thought.

   [ . . . much deleted . . . ]   If not, I shall regretfully have to 
   conclude that your idea is indeed too profound, and put it into my 
   kill list.


This sort of remark always seems to irritate people, but, regardless,
I want to suggest that this controversy is pretty much unresolvable.

Stevan's line is that i) the Turing Test is not sufficient to prove
intentionality, but ii) the Total Turing Test *is* sufficient.

"Intentionality" is the relationship between thoughts and the things
in the world that they are about.  To make his claim plausible, Stevan
must show that there is some aspect of this relationship that
necessarily follows from the TTT but not from the TT.  Clearly there
*is* such an aspect: Stevan calls it "transduction", but even if we
don't like the word, we still have to admit that a system with robotic
capabilities relates to the objects of *our* intentionality in a
different way than a system without such capabilities: it can sense
and manipulate those objects, rather than merely talking about them.

Therefore the intentionality of a machine that can pass the TTT is more
strongly related to human intentionality than that of one that can
only pass the TT.

The next question, though, is whether this kind of relationship is
*important*.  I don't think there can be any universal answer, because
importance is determined by the intended purpose of the machine.  If
the machine is designed to be a conversation partner, then TT
capability is sufficient; if it is designed as, say, a gymnastics
instructor, some degree of TTT capability is necessary.  If it is
merely designed to be "conscious", it all comes down to yet another
hopeless battle over terminology.

	-- Bill


