From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uwm.edu!linac!att!princeton!phoenix.Princeton.EDU!harnad Sun May 31 19:04:22 EDT 1992
Article 5931 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uwm.edu!linac!att!princeton!phoenix.Princeton.EDU!harnad
From: harnad@phoenix.Princeton.EDU (Stevan Harnad)
Subject: Re: Grounding: Virtual vs. Real
Message-ID: <1992May27.043004.28647@Princeton.EDU>
Originator: news@ernie.Princeton.EDU
Sender: news@Princeton.EDU (USENET News System)
Nntp-Posting-Host: phoenix.princeton.edu
Organization: Princeton University
References: <1992May25.214006.29965@Princeton.EDU> <4799@sheol.UUCP>
Date: Wed, 27 May 1992 04:30:04 GMT
Lines: 90


In article <4799@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:

>1. What exactly are the criteria that make up the "Total" in the TTT?

Indistinguishability from a real person in both sensorimotor and
linguistic competence and performance. For both the TT and the TTT the
critical constraint was that the candidate must not just be able
to do things that are SIMILAR to what a real person can do, or just
PART of what a real person can do. It must be TOTALLY indistinguishable
in every relevant respect. (What is relevant is in part an empirical
question about actual human capacity and about our intuitive ability
to judge when they are or are not being displayed.) To be safe, it's
best to capture more rather than less, and worry later about how to
scale it down to an aphasic, apraxic, paralyzed, blind, deaf, or
severely retarded person's capacities. That's all part of the thrust
of the "Totality" constraint (in ALL forms of Turing Testing).

>2. Why, in principle, is the TTT an improvement over the TT?

Because people can do more than just speak; because there are
relevant give-aways to a failure of Total indistinguishability
other than pen-pal interactions alone; and because the TT (passed
by a computer alone, in virtue of doing implementation-independent
symbol manipulation) has been invalidated by Searle's Argument
as well as the Symbol Grounding Problem, whereas the TTT is
immune to both.

>3. Just what is "purely symbolic" anyhow?

Syntactic and implementation-independent. It's not that implemented
computations are nonphysical, but that the specifics of the physics
are irrelevant because radically different physical implementations
may implement the same computations.
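(A toy illustration, not from the post: "purely symbolic" computation is
rule-governed shape manipulation, so any physical system that respects the
same syntactic rules implements the same computation. The function name and
the choice of binary successor as the example are mine.)

```python
# Hypothetical toy example of a "purely symbolic" computation: these
# rewrite rules increment a binary numeral while treating '0' and '1'
# as meaningless shapes. Vacuum tubes, transistors, or beads on wires
# that obey the same rules all implement the very same computation.

def successor(bits: str) -> str:
    """Increment a binary numeral by purely syntactic rewriting."""
    result, carry = [], True
    for b in reversed(bits):      # scan symbol shapes right to left
        if carry and b == '1':
            result.append('0')    # rule: '1' + carry -> '0', keep carry
        elif carry and b == '0':
            result.append('1')    # rule: '0' + carry -> '1', drop carry
            carry = False
        else:
            result.append(b)      # rule: copy the symbol unchanged
    if carry:
        result.append('1')        # rule: overflow adds a leading '1'
    return ''.join(reversed(result))

print(successor('1011'))  # -> 1100
```

Nothing in the rules mentions the physics of the device running them; that
is the sense in which the computation is implementation-independent.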

>I find the analogy of a computer process as equivalent to the whole
>airplane misleading, simply because the interfaces that constitute what
>it means "to fly" involve the manipulation of (real or virtual) air. 
>But even that is not to say that arbitrarily large parts of the airplane
>might not be replaced by computer processes.  One could at least imagine
>a physical artifact that treated air molecules, maxwell-demon-like, as
>symbolic inputs, and performed a computation on them that resulted in
>their final states (interpreted symbolically, of course) having altered
>momentum so as to provide lift and thrust in the air. 

Don't think of smart (i.e., computer aided) airplanes flying (that
flying is still real) and don't think of flight simulation for pilots
(because their minds are real). Think of a simulated airplane in
simulated air purely internal to a computer. That's where none of the
right stuff is going on -- even if it's computationally equivalent to
the real thing.

>Again I claim, there IS no such a thing as a "purely computational"
>candidate, scaled down or otherwise.  Any actual realization of a
>computation has a physical reality.

But none of the physical particulars of the realization are relevant;
only the formal symbolic (syntactic) properties are.

>I've seen no
>justification for thinking that the distance between TT and TTT is
>anything but trivial (compared, say, to the distance between Eliza
>and a TT-passing process).  (And that's independent of whether a TT-passing
>entity could or could not be entirely composed of computable processes.)

The TT passed only by computation has, I suggest, already been shown
to be ungrounded. But it is only a hypothesis that it could be
passed by computation alone. Perhaps it could not. It is more likely
that the only system that could pass the TT would be a hybrid one
that could also pass the TTT. But even if not, even if a 
computational simulation of the robot that could pass the TTT could
actually pass the TT, it would only be an oracle, just as a
planetary simulation would be: It could predict what all the
implemented robot's (or real solar system's) movements, words and
thoughts would be (just as it could predict all planetary
interactions, motions and positions), but it would not really be
moving or thinking.
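(A minimal sketch of the "oracle" point, with made-up numbers and a crude
integrator of my own choosing: a planetary simulation predicts where a body
would be, yet nothing in the machine has mass or orbits; only bit patterns
interpretable as positions change.)

```python
import math

# Hypothetical toy orbit simulation: forward-Euler steps of one body
# circling the origin (the "sun"). The program is an oracle for the
# body's predicted positions, but nothing here is really moving.

GM = 4 * math.pi ** 2            # gravitational parameter in AU^3/yr^2

def step(x, y, vx, vy, dt):
    """Advance the predicted state by one Euler step."""
    r3 = (x * x + y * y) ** 1.5
    ax, ay = -GM * x / r3, -GM * y / r3   # inverse-square acceleration
    return x + vx * dt, y + vy * dt, vx + ax * dt, vy + ay * dt

# An "Earth" starting at 1 AU with roughly circular-orbit speed:
x, y, vx, vy = 1.0, 0.0, 0.0, 2 * math.pi
for _ in range(1000):                     # one simulated year, dt = 0.001 yr
    x, y, vx, vy = step(x, y, vx, vy, 0.001)

print(x, y)   # a predicted position -- no orbit has actually occurred
```

The printed pair is interpretable as a position one year hence; the
interpretation, not any motion, is what the computation delivers.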

Where is the big step taken? Intellectually, it would be in designing
the purely symbolic oracle (if that could be done); the step from there
to building a real robot along the lines specified by the oracle would
be a much less profound one -- intellectually. But if the question is
about the properties of the computational oracle vs the real robot, the
difference would be like night and day (as in the case of the
computational and real solar system), with nobody home in the
virtual-TTT (hence actually just TT) computational model and somebody
home in the real TTT robot.
-- 
Stevan Harnad  Department of Psychology  Princeton University
harnad@clarity.princeton.edu / harnad@pucc.bitnet / srh@flash.bellcore.com 
harnad@learning.siemens.com / harnad@elbereth.rutgers.edu / (609)-921-7771