From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!psinntp!psinntp!dg-rtp!sheol!throopw Sun May 31 19:04:37 EDT 1992
Article 5957 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!zaphod.mps.ohio-state.edu!cis.ohio-state.edu!rutgers!psinntp!psinntp!dg-rtp!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Summary: still unconvincing, but the subject may be mined out
Message-ID: <4814@sheol.UUCP>
Date: 28 May 92 03:39:14 GMT
References: <1992May25.214006.29965@Princeton.EDU> <4799@sheol.UUCP> <1992May27.042826.28187@Princeton.EDU>
Lines: 170

I find this discussion thread remarkably interesting and useful.  
And so I remark upon it.

-> harnad@phoenix.Princeton.EDU (Stevan Harnad)
->> throopw@sheol.UUCP (Wayne Throop)

->>1. What exactly are the criteria that make up the "Total" in the TTT?
-> Indistinguishability from a real person in both sensorimotor and
-> linguistic competence and performance. For both the TT and the TTT the
-> critical constraint was that the candidate must not just be able
-> to do things that are SIMILAR to what a real person can do, or just
-> PART of what a real person can do. It must be TOTALLY indistinguishable
-> in every relevant respect. (What is relevant is in part an empirical
-> question about actual human capacity and about our intuitive ability
-> to judge when they are or are not being displayed.) 

Yes, "in part an empirical question about human capacity".  But an awful
lot is not a question of capacity, and it is these issues that I'm still
wondering about.  For example, it is within human capacity to detect
moisture in the breath of a speaker.  Is this relevant?  It is within
human capacity to see that body language, pupil dilation, and other
nonverbal cues are anomalous, but are they relevant?  It is well within
human capacity to see that the candidate does not eat when it might be
socially appropriate, but again... is it relevant?  It would likely be
possible to tell the difference between a voice produced by forcing air
from the lungs through the vocal system via the diaphragm and one
produced by a speaker hidden out of sight in the throat, but again, is
this difference relevant?

I hope it is agreed that these are NOT empirical issues of capacity. 
I'm willing to punt on the "what is an average human" question.  But
what things that average humans can do are *relevant* to the issue of
intelligence?

It seems to me that an entity's ability to hold up one end of a
conversation is relevant, and not much else is.  Specifically, how the
conversation is encoded (whether language, glyphs, pheromones, pixels,
or anything else) is irrelevant.  Is there any particular reason to
suppose something else, like the number of fingers, or hair color, or
whatnot, IS relevant?

And I also hope it is agreed that this is not a question of "scaling
back" an "average" human interaction.  It is a question of what is
considered relevant in human communication *within* average or normal
capability.  Even a fully capable human who only stands there and winks
in morse code is very likely to be thought to be intelligent (especially
if that human signals (in winked morse code) a good reason for 
standing still like that). 

So again, I'm clear that we're talking about some reasonably defined
average, or normal, or mean of human capabilities, and relevance related
to these capabilities can be empirically determined.  But it seems
incredible to me that the TTT really includes such silly things as eye
blink rate, or limb rigidity, or other things that can clearly be
detected by most humans, as relevant factors in the question of
intelligence.  Yet that's what seems to be required.  I would need
additional persuasion to overcome my incredulity on this point. 

->>2. Why, in principle, is the TTT an improvement over the TT?
-> Because people can do more than just speak, because there are
-> other relevant give-aways to not being Totally indistinguishable
-> than just pen-pal interactions, and because the TT (passed by 
-> a computer alone, in virtue of doing implementation-independent
-> symbol manipulation) has been invalidated by Searle's Argument
-> as well as the Symbol Grounding Problem, whereas the TTT is
-> immune to both.

I disagree.  The TTT seems just as vulnerable to Searle's CR scenario as
is the TT.  If the transducers are trivial and immediately translate to
symbolic inputs to a symbol-crunching process, then a TTT testee would
be in the same boat as a TT testee, with shallow transduction and
essentially all symbol manipulation.  Note that the TT testee cannot be
without transducers, because no realization of symbol manipulation can
be totally transducer-free.  The notion of "implementation-independent
symbol manipulation" depends on implementation-*de*pendent transduction
to the symbol engine.  Thus the only non-symbolic difference between the
TT testee and the TTT testee would be which transducers it is hooked to. 
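The layering described above can be put concretely.  What follows is a
minimal illustrative sketch (the names and the trivial rule inside
`symbol_engine` are my own invented stand-ins, not anything from the
discussion): the same pure symbol engine sits behind interchangeable
shallow transducers, so the only non-symbolic difference between a TT
testee and a TTT testee is which transducers happen to be attached.

```python
def symbol_engine(symbols):
    """Pure, implementation-independent symbol manipulation:
    maps input symbols to output symbols by formal rule alone.
    (The rule here is a trivial stand-in.)"""
    return ["echo"] + symbols

# Shallow transducers: immediate translation between a physical
# interface and the engine's symbols.

def teletype_in(text):
    """TT-style interface: keystrokes to symbols."""
    return text.split()

def teletype_out(symbols):
    """Symbols back to printable text."""
    return " ".join(symbols)

def camera_in(pixels):
    """TTT-style interface, equally shallow: raw intensities
    immediately translated into symbolic tokens."""
    return ["bright" if p > 127 else "dark" for p in pixels]

# The very same engine behind either interface:
tt_reply  = teletype_out(symbol_engine(teletype_in("hello there")))
ttt_reply = teletype_out(symbol_engine(camera_in([200, 30])))
```

On this picture the TTT testee is "in the same boat" as the TT testee
precisely because everything past the first line of transduction is
symbol manipulation.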

Objections to the relevance of this scenario have already been raised,
but I do not see that they are problems.  As I understand the objections,
they amount (roughly) to

   - The TTT transduction might be deep, rather than shallow.

     This is true.  But then again it may not be, and the TTT
     itself is neutral.  What happens within the entity at very
     small "depths" is not a part of the TTT-checkable interface.
     Therefore, in the same sense that the TTT insists that the
     TTT testee may not be a "symbolic engine" inside, it also
     *may* *be*, and the TTT has no way to tell.

   - It may be impossible to produce TTT-capability with such
     shallow transducers, but possible to produce TT-capability.

     Again this is true.  But again, the reverse has not been ruled out. 
     Specifically interesting on this point, it hasn't even been ruled out
     that *human* TTT capability is due to symbolic engines with shallow
     transducers.  Nerve impulses and the CNS processing related to them may
     be a physically encoded symbol system.

->>3. Just what is "purely symbolic" anyhow?
-> Syntactic and implementation-independent. It's not that implemented
-> computations are nonphysical, but that the specifics of the physics
-> are irrelevant because radically different physical implementations
-> may implement the same computations.

Granted.  But consider:

-> Don't think of smart (i.e., computer aided) airplanes flying (that
-> flying is still real) and don't think of flight simulation for pilots
-> (because their minds are real). Think of a simulated airplane in
-> simulated air purely internal to a computer. That's where none of the
-> right stuff is going on -- even if it's computationally equivalent to
-> the real thing.

I'm willing to agree that a symbolic implementation must be coupled with
IO symbol translation into the domain of the problem in order to be
considered to have the "X capability" for any X.  That is, a symbolic
engine that "flies" must be coupled to air in order to have the
capability to fly, a symbolic simulation of a kidney must be connected
to real live fluids and molecules, perhaps Maxwell-demon-like, in order
to have the capability to filter blood.  Running the symbolic engine in
an implementation that is not coupled with the domain of interest
indeed does not yield "X capability" for whatever X we consider.  I'll even
agree that the abstract symbol system isn't what should be considered to
have "understanding", but rather some actual realization of that symbol
system.  After all, I had said in an earlier thread that it didn't seem
to me to be "computers" or even "programs" that can be said to
understand, but "processes", which involve a specific instantiation of
computation. 

So far so good, I hope.

So the question that remains here is, what is the domain of interest in
which "intelligence capability" can be demonstrated.  I don't see why
connections to pixels, keyboard, speaker and microphone aren't more than
sufficient for the purpose. 

-> But none of the physical particulars of the realization are relevant;
-> only the formal symbolic (syntactic) properties are.

No, I think that the physical particulars ARE relevant, in the sense
that any particular computational process includes these physical
details.  The fact that some other computational process has identical
formal properties doesn't make them the same process.  It only makes them
realizations of the same symbol system.
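The distinction being drawn here can be made concrete with a toy sketch
(invented purely for illustration): two structures with identical formal
(syntactic) properties are nonetheless distinct realizations.

```python
# Each dict stands in for the complete syntactic state of a symbol
# system (say, a Turing machine's tape and head position).
state_a = {"tape": ["1", "0", "1"], "head": 0}
state_b = {"tape": ["1", "0", "1"], "head": 0}

same_formal_properties = (state_a == state_b)  # identical syntax
same_realization = (state_a is state_b)        # but distinct objects
```

Equality of formal properties holds; identity of realization does not,
which is exactly the sense in which two processes can realize the same
symbol system without being the same process.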

-> Where is the big step taken? Intellectually, it would be in designing
-> the purely symbolic oracle (if that could be done); the step from there
-> to building a real robot along the lines specified by the oracle would
-> be a much less profound one -- intellectually. But if the question is
-> about the properties of the computational oracle vs the real robot, the
-> difference would be like night and day (as in the case of the
-> computational and real solar system), with nobody home in the
-> virtual-TTT (hence actually just TT) computational model and somebody
-> home in the real TTT robot.

It may boil down to a question of identity.  Is the
oracle-running-unconnected the "same entity" as the
oracle-running-against-the-real-world?  Certainly the physical
process considered to be "inside" the entity is different in
the two cases, which forms a basis for a claim that they differ.

But same or different, the small scale of the differences between the
two cases makes it seem very strange indeed to call one self-aware and
the other "nobody home".  (Assuming it is possible to partition the
work this way, of course.)

--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


