Article 6807 of comp.ai.philosophy:
From: harnad@phoenix.Princeton.EDU (Stevan Harnad)
Newsgroups: comp.ai,comp.robotics,comp.ai.philosophy
Subject: Re: Turing Indistinguishability is a Scientific Criterion
Message-ID: <1992Sep7.150457.1037@Princeton.EDU>
Date: 7 Sep 92 15:04:57 GMT
References: <1992Sep6.200121.4383@Princeton.EDU> <1992Sep7.003832.3221@memstvx1.memst.edu>
Sender: news@Princeton.EDU (USENET News System)
Followup-To: comp.ai.philosophy
Organization: Princeton University

In article <1992Sep7.003832.3221@memstvx1.memst.edu> langston@memstvx1.memst.edu (Mark C. Langston) writes:
>
>Does one actually infer that an agent with which one is communicating has a
>mind, or does one assume the fact a priori, before attempting the
>communication? (e.g., what would be the point in trying to communicate with
>a rock, if I did not assume it was capable of a response?)

Whether it's a priori or a posteriori at the outset makes no difference.
My interpretation of Turing is that it's arbitrary to treat candidates
differently if you can't tell their performances apart. (If you think
this is not a strong criterion, try corresponding with a rock for a while:
your assumptions, or lack of them, will have nothing to do with its
failure on the TT.)

>By asking a machine to achieve TT-indistinguishable performance, one would
>already be asking more from the machine than one would of a person, similar
>to asking a 3-year old to solve third order differential equations...

My methodological point was that in the special case of trying to capture
mental states (which are not IDENTICAL to TTT-capacity, but merely, one
hopes, reliably predicted by it) it is arbitrary to aim at capturing less
than TOTAL capacity a priori. So forget about capturing the development or
pathology or interindividual variability of the mind until you've
captured the mind to begin with. Otherwise it's like trying to model only
how a plane falls, taxis or hops: that's not modelling flight.

>[People have] certain performance boundaries, and the investigator
>assumes similar boundaries... the judgement of
>TT-indistinguishability is relative, based on the criteria used to develop
>the assumed performance boundaries.  "Total indistinguishability from
>human performance" is not a valid criterion.  What about a blind human, or
>an amputee, or a deaf person, or a mute?  Under the TTT, these agents are
>less intelligent.  Is this valid?

As natural as it seems (even Claude Bernard recommended studying
biological organ function by studying its pathologies), I think this
strategy is wrong in mind-modelling because we are actually modelling
something other than functional capacities, even though functional
capacities are our only predictors. Carving out arbitrary, or even
natural, boundaries or modules and settling for subtotal functional
capacity simply reduces (I think drastically) our chances to
capture mental states. (And even Claude Bernard didn't aspire only to a
model of pathology: He just suggested that pathology might give you
hints as to how to model full normal function. That's certainly true of
mind-modelling as well: Gather your hints on how to pass the TTT
wherever you may. But keep your sights on the TTT. Don't just baptize
your pathological model as having a mind because it's gone some of the
negative distance.)

Of course the infant and the severely handicapped have minds; so do
people who are asleep or in a (reversible) coma. But capturing
performance indistinguishability with the latter is clearly not
helpful; I'm suggesting that excusing your model for being infantile
and handicapped while still crediting it with a mind is not a hopeful
methodological strategy. We should not let excuses lull us into
thinking more of our models than they warrant. Only TTT
performance has face validity. Besides, to capture even an infant
Turing-indistinguishably is to give it the capacity to grow up and pass
the TTT. And to capture disabled performance Turing-indistinguishably
is either to capture also its capacity for recovery, or at least its
backward path from full to partial function. Either way, the TTT is
part of the equation.

>What about unmanipulatable, intangible objects? (e.g., love? or, mind?)

Grounding begins with concrete objects. The road from grounded concrete
symbols to grounded higher-order symbols is the road from (grounded)
"horses" and (grounded) "stripes" to "Zebras" = "horses" with "stripes"
(which inherits the grounding), and on to "stripedness," "unicorns,"
"goodness," "truth," and "beauty," along the same grounded path.

>And, to use your pen-pal example:  Assume an agent writing such a letter.
>...re-reading it at a later date.
>The letter has taken on new meaning... [and the recipient also]
>has a completely different symbolic representation of said text.
>In what way, then, is the meaning intrinsic to the mechanism(s) used
in the interpretation?  Has either agent's brain undergone significant 
>physical change?  Have the underlying mechanisms changed?  Or has the 
>knowledge base of each agent simply changed over time and affected the
>mechanisms used to interpret the text?

I see no problem here. This is all stuff that can happen with a real
pen-pal and so must be possible with a Turing-Indistinguishable pen-pal
too. As to what's going on inside to make it possible -- build the 
TT-passer first, then we'll talk. Dubbing a lesser candidate a
"knowledge base" is just prejudging the issue (or begging the question).

>Hmmm...a plane that can only fall (a parachute), jump (a VTOL aircraft), or
>taxi (a car) is no plane at all; (yes, but they are each useful in other
>aspects, and not just on the way to developing full-fledged (pardon the pun)
>flight.)  and gliding is pertinent (the space shuttle) only if it can scale
>up to autonomous flight.

Missing the point again. The article was about cognitive modelling, not
about building useful devices (i.e., about Cog Sci, not just AI).

>> SH: You don't have to be able to define
>> intelligence (knowledge, understanding) in order to see that people have
>> it and today's machines don't. Nor do you need a definition to see that
>> once you can no longer tell them apart, you will no longer have any
>> basis for denying of one what you affirm of the other.
>
>Two points:
> 1) the definition, although 'not needed' as claimed above, would still
>    be relative to the observer's/experimenter's criteria for intelligent

What criteria? Did my grandmother use "criteria" or definitions
throughout her long pre-AI life in failing to distinguish all the
people and animals she met from machines with identical performance?

> 2) While definitions are not necessary at the extremes, I think a consensus
>    on criteria for selection (more stringent than those set forth in the TT)
>    would help; it would be nice if we could know that we're headed in the
>    right direction.

The criteria could not help but be arbitrary and self-fulfilling. Not
very helpful, in my view...

-- 
Stevan Harnad  Department of Psychology  Princeton University 
& Lab Cognition et Mouvement URA CNRS 1166 Universite d'Aix Marseille II
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
harnad@learning.siemens.com / harnad@gandalf.rutgers.edu / (609)-921-7771