Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!spool.mu.edu!agate!stanford.edu!rock!mcnc!aurs01!sheol!throopw
From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Grounding: Virtual vs. Real
Summary: still lacking definition, motivation
Message-ID: <4799@sheol.UUCP>
Date: 27 May 92 01:47:39 GMT
Article-I.D.: sheol.4799
References: <1992May25.214006.29965@Princeton.EDU>
Lines: 138

> harnad@phoenix.Princeton.EDU (Stevan Harnad)
> There still seems to be some confusion about the critical role of
> transduction in symbol grounding,  [...]

It seems to me that before we can talk about the critical role
of transduction, we have to be more specific about just which
transductions are involved in the first place.  To wit:

1. What exactly are the criteria that make up the "Total" in the TTT?
2. Why, in principle, is the TTT an improvement over the TT?
3. Just what is "purely symbolic" anyhow?

Some of these points were covered somewhat in the post I'm responding
to, but not (it seems to me) sufficiently. 

The first point, "what exactly are the criteria", wasn't addressed at
all.  Presumably the criteria are something like "humanlike to the
limit of human senses", for some reasonable "average" or "normal"
human.  But it would help (help *me*, anyway) to have some direct
examples of what is included and what is not, especially cases
"near the boundary".

For example, presumably X-ray vision is out, since that isn't a "bare"
human sense.  Presumably even UV sight is out, since humans who have it
aren't exactly common (eg: some cataract patients).  But presumably body
(surface) temperature IS in (since it can be detected by touch, as in a
handshake), and the robot's servomotors had better be quiet (since a
high-pitched whine would be detectable).

But what about a heartbeat?  A pulse?  Both are readily detectable by
human senses (eg: place an ear to the chest).  Are they relevant?  If
not, why aren't they relevant to judging emotional response, as a part
of body language?  Speaking of body language, what about pupil
dilation, blink frequency, body posture, gestural frequency, and so
on?  Are they valid parts of what it means for the testing to be
"total"?  Over what span of time may we observe (eg: what if the
robot's hair grows more slowly, or it doesn't age, or ages too fast,
or requires an electrical charge but can't eat)?  Are these things
relevant?  If not, on what grounds are social interactions (eating as
a social, communicative activity), or long-term commitments, ruled
out?

And what if the "average" human has the *capacity* to discern a
difference, but most humans would not exercise it?  Here I mean
something like Feynman's demonstrations of "magic" tricks that merely
exploited scent, heat, moisture, and other cues people often ignore.

I am not being frivolous here; I really would like to know.
I am genuinely in the dark as to just what this TTT involves.

The second point, motivation for TTT as opposed to TT, was
covered somewhat:

> The extra "T" in TTT means
> Total, and it is this T that underlies all Turing-style criteria: There
> are arbitrarily many ways to do PARTS of what a person can do (hence
> the endless stream of "toy" models), so the only way to get the degrees
> of freedom down to normal empirical size is to scale up to Total
> capacity.

That is, the TTT is harder to "cheat" at than the TT.  This is a good
goal.  But no justification has been given that it would not simply tend
to yield more false negatives.  Certainly without nailing down just what
is meant by "total" (eg: are pheromones included?), it isn't as well
defined as the TT is.  And I've seen no adequate justification for the
position that the TTT is significantly harder to pass than the TT (and
hence no justification that the TTT will really eliminate some false
positives, let alone not fall prey to false negatives). 

The third point was addressed only obliquely.  I claim that computers
(for example) are NOT purely symbolic.  To see why, consider:

> (And "analog" does not just mean continuous
> as opposed to discrete, but physical, as in the case of an airplane, as
> opposed to symbolic, as in the case of computations that are merely
> interpretable as if they were an airplane.)

This makes no sense to me.  Computations performed by a computer (in,
eg, a fly-by-wire airplane) ARE physical things.  More, ALL computations
performed by computer systems are physical processes.  They aren't
"merely interpretable as if they were [parts of] an airplane", any more
than a physical cable connecting controls to aerodynamic surfaces is
"merely interpretable" as a part of an airplane.

I find the analogy of a computer process as equivalent to the whole
airplane misleading, simply because the interfaces that constitute what
it means "to fly" involve the manipulation of (real or virtual) air.
But even that is not to say that arbitrarily large parts of the airplane
might not be replaced by computer processes.  One could at least imagine
a physical artifact that treated air molecules, Maxwell-demon-like, as
symbolic inputs, and performed a computation on them that resulted in
their final states (interpreted symbolically, of course) having altered
momenta so as to provide lift and thrust in the air.

( This is, of course, a distant relative of the "Bolivian Economy"
  ploy, in reverse.  That is, it is the inverse of the claim that
  "anything can be interpreted as a computation", ie: "any actual,
  realized computation has a physical reality". )
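
To make that demon a bit more concrete, here is a toy sketch in C
(every name and number in it is invented for illustration; it is a
sketch of the thought experiment, not of any workable design).  It
reads one molecule's incoming velocity as a "symbol", computes a
downward-deflected outgoing velocity, and, per Newton's third law,
credits the wing with the opposite momentum change, which is to say
lift.

    /* Toy "Maxwell-demon wing": an incoming molecule's state is read
       as a symbol, a computation chooses its outgoing state, and the
       wing receives the opposite momentum change.  Illustrative only. */
    #include <stdio.h>

    struct vec { double x, y; };

    /* The "computation": deflect the molecule downward. */
    static struct vec deflect(struct vec in)
    {
        struct vec out = in;
        out.y = in.y - 100.0;        /* add 100 m/s of downward speed */
        return out;
    }

    int main(void)
    {
        struct vec v_in  = { 250.0, 0.0 };  /* molecule meets "wing"  */
        struct vec v_out = deflect(v_in);
        double m = 4.8e-26;                 /* approx. mass of N2, kg */

        /* Newton III: impulse on wing = -(molecule momentum change) */
        double lift = -m * (v_out.y - v_in.y);
        printf("upward impulse per molecule: %g kg m/s\n", lift);
        return 0;
    }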

Why is all this relevant?  Consider:

> One scaled down candidate, however, can already be rejected as not
> having the requisite capacity, and that is a purely computational one,
> even TT-scale, waiting only to be hooked up to some trivial transducers
> so it can DEMONSTRATE its capacity. 

Again I claim, there IS no such thing as a "purely computational"
candidate, scaled down or otherwise.  Any actual realization of a
computation has a physical reality.

IF the transducers that allow an entity of TT-level complexity to pass
the TTT ARE indeed trivial, then the whole notion of the TTT becomes
pretty uninteresting.  And hence the problem for the TTT: I've seen no
justification for thinking that the distance between TT and TTT is
anything but trivial (compared, say, to the distance between Eliza
and a TT-passing process).  (And that's independent of whether a TT-passing
entity could or could not be entirely composed of computable processes.)

> That system would have TTT capacity
> in about the same sense that a single cell might, if only it were
> connected to the rest of the brain and body; or, to use an analogy
> closer to home, only in the sense that a computer would have
> computational capacity, if only it were plugged in.

Others have found this to be a bad analogy.  I tend to agree.  In
particular, in the above context, it can be seen to beg the question of
whether the distance between the TT and the TTT is large or small. 


( Note that this also connects with the position that computer
  processes, far from being unable to pull off Godelization tricks,
  employ such tricks routinely, all the way from the fact that, by
  necessity, their lowest-level "symbols" are "Godel-numbered" on
  top of physical processes, through examples of "diagonalization"
  or dual-interpretation of symbols, such as evolutionary algorithms,
  annealing algorithms, and the Eurisko program, which found a bug
  in the Lisp interpreter/compiler it was running on.  But I digress. )
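
( One more indulgence, since "dual-interpretation" may sound abstract.
  Below is a toy illustration in C, entirely made up for this post: it
  is not Eurisko's actual machinery, and the opcodes and byte layout
  are invented.  The same six bytes are read at one level as
  instructions for a tiny stack machine, and at another level as plain
  data that a "mutation" step rewrites, after which the
  instruction-level reading changes too. )

    /* Dual-interpretation toy: one byte string, two readings.
       Opcodes (invented): 0 = push next byte, 1 = add top two,
       2 = halt and return top of stack. */
    #include <stdio.h>

    static int run(const unsigned char *code)
    {
        int stack[16];
        int sp = 0, pc = 0;
        for (;;) {
            switch (code[pc++]) {
            case 0: stack[sp++] = code[pc++]; break;         /* push */
            case 1: sp--; stack[sp - 1] += stack[sp]; break; /* add  */
            case 2: return stack[sp - 1];                    /* halt */
            }
        }
    }

    int main(void)
    {
        unsigned char genome[] = { 0, 3, 0, 4, 1, 2 }; /* push 3, push 4, add */

        printf("read as code:   %d\n", run(genome));   /* prints 7 */

        genome[3] = 5;      /* read as data: "mutate" one operand */
        printf("after mutation: %d\n", run(genome));   /* prints 8 */
        return 0;
    }
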
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


