Xref: newshub.ccs.yorku.ca comp.ai:4205 comp.robotics:2114 comp.ai.philosophy:6803
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!uunet!gatech!nntp.msstate.edu!memstvx1!langston
From: langston@memstvx1.memst.edu (Mark C. Langston)
Newsgroups: comp.ai,comp.robotics,comp.ai.philosophy
Subject: Re: Turing Indistinguishability is a Scientific Criterion
Message-ID: <1992Sep7.003832.3221@memstvx1.memst.edu>
Date: 7 Sep 92 00:38:32 -0600
References: <1992Sep6.200121.4383@Princeton.EDU>
Distribution: world
Organization: Memphis State University
Lines: 233

In article <1992Sep6.200121.4383@Princeton.EDU>, harnad@phoenix.Princeton.EDU (Stevan Harnad) writes:
> It is important to understand that the Turing Test (TT) is not, nor was
> it intended to be, a trick; how well one can fool someone is not a
> measure of scientific progress. The TT is an empirical criterion: It
> sets AI's empirical goal to be to generate human-scale performance
> capacity. This goal will be met when the candidate's performance is totally
> indistinguishable from a human's. Until then, the TT simply represents
> what it is that AI must endeavor eventually to accomplish scientifically.
> 
> Pen-Pals Versus Robots
> 
> In my own papers I have tried to explain how trickery, deception and
> impersonation have nothing at all to do with the scientific import of
> Turing's criterion (Harnad 1989, 1991). AI is not a party game. The
> game was just a metaphor. The real point of the TT is that if we had a
> pen-pal whom we had corresponded with for a lifetime, we would never
> need to have seen him to infer that he had a mind. So if a machine
> pen-pal could do the same thing, it would be arbitrary to deny it had a
> mind just because it was a machine. That's all there is to it!


Does one actually infer that an agent with which one is communicating has
a mind, or does one assume the fact a priori, before attempting the
communication?  (e.g., what would be the point in trying to communicate
with a rock if I did not assume it was capable of a response?)


> 
> This entirely valid methodological point of Turing's is based on the
> "other minds" problem (the problem of how I can know that anyone else
> but me actually has a mind, actually thinks, actually has intelligence
> or knowledge -- these all come to the same thing): It is arbitrary to
> ask for more from a machine than I ask from a person, just because it's
> a machine (especially since no one knows yet what either a person or a
> machine REALLY is). So if the pen-pal TT is enough to allow us to
> correctly infer that a real person has a mind, then it must by the same
> token be enough to allow us to make the same inference about a
> computer, given that the two are totally indistinguishable to us (not
> just for a 5-minute party trick or an annual contest, but, in
> principle, for a lifetime). Neither the appearance of the candidate nor
> any facts about biology play any role in my judgment about my human pen
> pal, so there is no reason the same should not be true of my
> TT-indistinguishable machine pen-pal.

By asking a machine to achieve TT-indistinguishable performance, one would
already be asking more from the machine than one would of a person, similar
to asking a 3-year-old to solve third-order differential equations in its
head.
  One assumes a certain level of response from each, and one should be
aware of the limitations of that response - I am not asserting that
TT-indistinguishable performance is impossible.  Instead, I am suggesting
that each such agent has certain performance boundaries, and that the
investigator assumes similar boundaries.  When the agent violates the
assumed boundaries, this does not indicate a violation of the actual
boundaries - it instead indicates that the assumed boundaries must be
adjusted.  The development of TT-indistinguishable performance would imply
a boundary overlap, regardless of the mechanism creating that overlap.
Simply put: the judgement of TT-indistinguishability is relative, based on
the criteria used to develop the assumed performance boundaries.  "Total
indistinguishability from human performance" is not a valid criterion.
What about a blind human, or an amputee, or a deaf person, or a mute?
Under the TTT, these agents are less intelligent.  Is this valid?


> 
> Now, although I too am critical of the TT, I think it is important that
> its logic -- which was only implicit in Turing's actual writing --
> should be made explicit, as I have tried to make it here and in my
> other writings, so we can see clearly the methodological basis for his
> proposed criterion. Elsewhere I have gone on to take issue with the TT
> on the basis of the fact that humans also happen to have a good deal
> more performance capacity over and above their pen-pal capacity. It is hence
> arbitrary and equivocal to focus only on pen-pal capacity; but Turing's
> basic intuition is still correct that the only available basis for
> inferring a mind is Turing-indistinguishable performance capacity. For
> TOTAL performance indistinguishability, however, one needs TOTAL, not
> partial, performance capacity, and that happens to call for all of our
> robotic performance capacities too: The Total Turing Test (TTT). And,
> as a bonus, the robotic capacities can be used to GROUND the pen-pal
> (symbolic) capacities, thereby solving the "symbol grounding problem"
> (Harnad 1990), which afflicts the pen-pal version of the TT, but not
> the robotic TTT.**

Is it necessary to submit my pen-pal to the TTT before I can infer that
he/she is a human, or, again, is it assumed?

> 
> --
> ** FOOTNOTE: In a nutshell, the symbol grounding problem can be stated
> as follows: Computers manipulate meaningless symbols that are
> systematically INTERPRETABLE as meaning something. The problem is that
> the interpretations are not intrinsic to the symbol manipulating
> system; they are made by the mind of the external interpreter (as when
> I interpret the letters from my TT pen-pal as meaningful messages).
> This leads to an infinite regress if we try to assume that what
> goes on in MY mind is just symbol manipulation too, because the thoughts
> in my mind do not mean what they mean merely because they are
> interpretable by someone ELSE's mind: Their meanings are intrinsic. One
> possible solution would be to ground the meanings of a system's symbols
> in the system's capacity to discriminate, identify, and manipulate
> the objects that the symbols are interpretable as standing for (Harnad
> 1987), in other words, to ground its symbolic capacities in its robotic
> capacities. Grounding symbol-manipulating capacities in
> object-manipulating capacities is not just a matter of attaching the
> latest transducer/effector technologies to a computer, however. Hybrid
> systems may need to make extensive use of analog components and perhaps
> also neural nets, in order to connect symbols to their objects (Harnad et
> al. 1991; Harnad 1992).

What about intangible, unmanipulable objects? (e.g., love? or mind?)
And, to use your pen-pal example: assume an agent writing such a letter.
The agent has completed said letter and is re-reading it at a later date.
The letter has taken on new meaning over the course of time.  The text of
the letter has not changed, and yet the symbolic representation of the
conveyed meaning has.  The agent the letter was sent to reads the letter
and forms a completely different symbolic representation of said text.  At
a later time, the addressee re-reads the received letter, just as the
originating agent has.  Again, the agent has a new symbolic representation
of the text.  In what way, then, is the meaning intrinsic to the
mechanism(s) used in the interpretation?  Has either agent's brain
undergone significant physical change?  Have the underlying mechanisms
changed?  Or has the knowledge base of each agent simply changed over time
and affected the mechanisms used to interpret the text?
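
To make the point concrete, here is a toy sketch of my own (the class,
the data, and the names are purely hypothetical - this is not anyone's
proposed architecture, just an illustration): the interpreting mechanism
is a fixed piece of code, yet the 'meaning' it assigns to the very same
text shifts as the agent's knowledge base shifts.

    # Toy illustration (Python): the interpretation routine never changes;
    # only the agent's knowledge base does.  The same letter therefore
    # "means" something different at different times and to different agents.

    class Agent:
        def __init__(self, name, knowledge):
            self.name = name
            self.knowledge = knowledge  # mutable associations: word -> memory

        def interpret(self, text):
            # Fixed mechanism: look each word up against current knowledge.
            return {w: self.knowledge.get(w, "(no association)")
                    for w in text.lower().split()}

    letter = "remember the garden"

    writer = Agent("writer", {"garden": "where we met"})
    print(writer.interpret(letter))    # at the time of writing

    writer.knowledge["garden"] = "long since paved over"
    print(writer.interpret(letter))    # re-read years later: new meaning

    reader = Agent("reader", {"garden": "my grandmother's roses"})
    print(reader.interpret(letter))    # the addressee's reading differs again

The interpreting code is identical in every call; only the stored
associations have changed - which is exactly the question posed above
about the agents' brains and mechanisms.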

> --
> 
> In fact, one of the reasons no computer has yet passed the TT may be that
> even successful TT capacity has to draw upon robotic capacity. A TT
> computer pen-pal alone could not even tell you the color of the flower
> you had enclosed with its birthday letter -- or indeed that you had
> enclosed a flower at all, unless you mention it in your letter. An
> infinity of possible interactions with the real world, interactions of
> which each of us is capable, is completely missing from the TT (and
> again, "tricks" have nothing to do with it).

Again, neither could a blind person.  Is this agent any less intelligent, any
less human?

> 
> Is the Total Turing Test Total Enough?
> 
> Note that all talk about "percentages" in judging TT performance is
> just numerology. Designing a machine to exhibit 100% Turing
> indistinguishable performance capacity is an empirical goal, like
> designing a plane with the capacity to fly. Nothing short of the TTT or
> "total" flight, respectively, meets the goal. For once we recognize that
> Turing-indistinguishable performance capacity is our mandate, the
> Totality criterion comes with the territory. Subtotal "toy" efforts are
> interesting only insofar as they contain the means to scale up to
> life-size. A "plane" that can only fall, jump, or taxi on the ground is
> no plane at all; and gliding is pertinent only if it can scale up to
> autonomous flight.

Hmmm... a plane that can only fall (a parachute), jump (a VTOL aircraft),
or taxi (a car) is no plane at all (yes, but each of these is useful in
other respects, and not merely on the way to developing full-fledged
(pardon the pun) flight); and gliding (the space shuttle) is pertinent
only if it can scale up to autonomous flight.

Hmmm.




> 
> The Loebner Prize Competition is accordingly trivial from a scientific
> standpoint. The scientific point is not to fool some judges, some of
> the time, but to design a candidate that REALLY has indistinguishable
> performance capacities (respectively, pen-pal performance [TT] or
> pen-pal + robotic performance [TTT]); indistinguishable to any judge,
> and for a lifetime, just as yours and mine are. No tricks! The real thing!

There have been a few humans (we've all met some) who would not pass the
TT, and who only pass because we'd be remiss if we didn't assume at least
the capacity for intelligent thought/action.  No agent performs to its
full capacity all the time, and one with limited contact with the 'real'
world (e.g., one that can only communicate with an agent through text, and
whose only contact with the world is through another agent's deigning to
type something to it) can only perform according to what is available for
processing.  Reminiscent of "If a tree falls in the woods..."



> 
> The only open questions are (1) whether there is more than one way to
> design a candidate to pass the TTT, and if so, (2) do we then need a
> stronger test, the TTTT (neuromolecular indistinguishability), to pick
> out the one with the mind? My guess is that the constraints on the TTT
> are tight enough, being roughly the same ones that guided the Blind
> Watchmaker who designed us (evolutionary adaptations -- survival and
> reproduction -- are largely performance matters; Darwinian selection
> can no more read minds than we can).
> 
> Let me close with the suggestion that the problem under discussion is
> not one of definition. You don't have to be able to define
> intelligence (knowledge, understanding) in order to see that people have
> it and today's machines don't. Nor do you need a definition to see that
> once you can no longer tell them apart, you will no longer have any
> basis for denying of one what you affirm of the other.

Two points:
 1) the definition, although 'not needed' as claimed above, would still
    be relative to the observer's/experimenter's criteria for intelligent
    behaviour,
 2) While definitions are not necessary at the extremes, I think a consensus
    on criteria for selection (more stringent than those set forth in the TT
    or the TTT, or at least less vague) would be greatly helpful, if not
    necessary, for advancement from the lowest to the (here) highest extreme.
    Unless one is expecting to evolve an intelligence a la Nature (better
    have a lot of food stashed - it's gonna be a long wait), one needs, if
    not a road map, at least a 'thataway' from a local denizen... and
    confirmation, if possible.  "You can't get there from here" does not
    apply; however, it would be nice if we could know that we're headed in
    the right direction.

Perhaps someone could post a set of 'landmarks' on the way to developing
a system capable of passing the TT or the TTT?


I'm not trying to attack either the TT or the TTT as a whole; I think they are
plausible goals - I'm merely questioning some of the roads along the way.


> -- 
> Stevan Harnad  Department of Psychology  Princeton University 
> & Lab Cognition et Mouvement URA CNRS 1166 Universite d'Aix Marseille II
> harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
> harnad@learning.siemens.com / harnad@elbereth.rutgers.edu / (609)-921-7771
-- 
+--------8<------Cut Here------8<------Cut Here------8<------Cut Here---------+
  Mark C. Langston   |  "Secrecy is the beginning of tyranny."                 
  Psychology Dept.   |  "Always listen to experts.  They'll tell you what can't
  Memphis State U.   |     be done, and why.  Then do it."                      
      "Pftph!"       |           -From the notebooks of Lazarus Long         


