Article 6178 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!gatech!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <60795@aurs01.UUCP>
Date: 9 Jun 92 16:13:34 GMT
References: <1992Jun9.051649.9894@cs.ucf.edu>
    <BILL.92Jun8213911@ca3.nsma.arizona.edu>
    <1992Jun08.225734.32166@spss.com>
    <BILL.92Jun8150837@cortex.nsma.arizona.edu>
Sender: news@aurs01.UUCP
Lines: 234

> bill@nsma.arizona.edu (Bill Skaggs)
> Message-ID: <BILL.92Jun8150837@cortex.nsma.arizona.edu>
> But how a machine
> might be then programmed is a question.  Turing provides an argument
> that programming the machine by hand would be impractical, so he
> suggests having it learn.

The "programming without learning" route to AI, to be a tractable
problem, must assume that there is some discernible level where
internal tokens are manipulated, which can be separated from the
perceptual system by some relatively clean boundary.
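
To make that assumption concrete, here is a toy sketch (Python; every
name in it is my own invention, not anybody's proposed architecture)
of what such a boundary would amount to: a perceptual front end that
reduces raw sense data to discrete tokens, and a symbol system that
sees only the tokens, never the raw signal.

    def transduce(raw_signal):
        """Perceptual side of the boundary."""
        tokens = []
        if max(raw_signal) > 0.5:   # stand-in for real feature detection
            tokens.append("BRIGHT")
        return tokens

    def manipulate(tokens):
        """Symbolic side: pure token manipulation, no raw-signal access."""
        return ["SAW"] + tokens

    print(manipulate(transduce([0.1, 0.9, 0.3])))   # -> ['SAW', 'BRIGHT']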

It seems clear to me that there isn't a great deal of empirical
support for that assumption.  But then to go on and claim that the
"learning" route to AI is the only viable one is simply begging the
question of whether such a boundary exists.

Now I'm NOT trying to sell anybody on the existence of such a
boundary.  I'm merely playing the role of skeptic, and so far I don't
see why the "programming" route to AI would result in a system without
grounding.  Harnad says that grounding is about *capability*, and not
about how that capability was gained, and I agree with him.

> In any case, I can't get too excited about "in principle" arguments.
> As far as I can see, saying something is true in principle means
> saying it would be true if you could change the inconvenient part of
> reality.

Hmpf.  Agreed.  But note that it is sometimes possible to change or
overcome the inconvenient part of reality, as in "in principle, people
can walk on the moon" or "in principle, a program could be written
which can beat all but the top few elite human chess players", or "in
principle, it is possible to fly faster than sound".  Contrasted with
"in principle it is not possible to have a complete, consistent formal
system" or "in principle, it is not possible to travel faster than
light".  (The "in principle" is being used somewhat differently between
these last two cases, but still...)

The point is, the statement "in principle, it is impossible to create
an entity with grounded symbol manipulating capability without having
that entity go through a period of learning" is a very different
statement than "there are engineering difficulties in attempting to
create an entity with grounded symbol manipulating capability without
having that entity go through a period of learning".

I suspect (as Neil Rickert points out) that any solution,
learning-based or not, could point toward solutions to the
engineering difficulties of the non-learning-based cases, while that
wouldn't be true if it were impossible in principle.

> markrose@spss.com (Mark Rosenfelder)
> Message-ID: <1992Jun08.225734.32166@spss.com>
>>>  a system with robotic
>>> capabilities relates to the objects of *our* intentionality in a
>>> different way than a system without such capabilities: it can sense
>>> and manipulate those objects, rather than merely talking about them.
>> Does that mean that human intentionality about the
>> weather somehow differs from our intentionality about our clothing
>> or other objects we CAN manipulate?  This seems very odd to me.
> I doubt it really seems odd to you. :)

I was keying on the "sense AND manipulate".  And it really does seem
odd to me that our *intentionality* towards objects we can't
manipulate would be different from that towards objects we can.

> Surely, as a general principle, 
> you'd agree that your knowledge of something is lessened without direct
> experience or without manipulative experience. 

But, "direct experience" is only the way humans acquire knowledge (or at
least, certain kinds of knowledge).  To say that this is the only way
to gain knowledge is to beg the question of whether knowledge can be
acquired in any other way.  And even as a practicing human, I don't
think I'd have to fall off a cliff in order to have a pretty certain,
visceral knowledge that I intend never to do so.  I haven't experienced
it, I haven't manipulated it (does screaming my throat raw and
thrashing my limbs about hysterically count as "manipulating"?), but I
have some pretty firm intentions about it.

I still agree with Harnad that the historical fact of how knowledge was
acquired is irrelevant.  What matters is the capability to interact with
the world.  (I seem to disagree about what counts as interacting with
the world.)

>> The difference between a computer and a robot is merely which effectors
>> and sensors are considered part of the entity.
> I think this is a bit disingenuous.  Do you really think that the computer
> you are reading this text on is just a handicapped robot?

Yes, I really think that.  I think that because it is quite common
for computers very much like the one I'm typing on to have voice,
cameras, and servomotors (limbs and wheels) plugged in, and the
result is typically referred to as a "robot".  Perhaps someone will
explain why they think differently.
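
To put the same point in programmer's terms: here is a toy sketch
(Python, purely illustrative; the part lists are made up) of the view
that a "robot" is just a computer with a different set of transducers
plugged in.

    class Computer:
        def __init__(self, sensors=None, effectors=None):
            self.sensors = sensors or []
            self.effectors = effectors or []

    # Same class, different plug-in lists:
    desktop = Computer(sensors=["keyboard"], effectors=["screen"])
    robot   = Computer(sensors=["keyboard", "camera", "microphone"],
                       effectors=["screen", "speaker", "servomotors"])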

> Programming an intelligent robot would be a very different task from 
> programming a computer to pass a (teletype) Turing Test.  Much of the robot's 
> program (as we could predict from our knowledge of the brain) would be 
> dealing with interpreting sensory input, driving motor output, and 
> controlling the robot's internal physical functionality.  The purely 
> linguistic portion of the robot's program might be small by comparison,
> and its design might be intimately affected by the interfaces to the
> sensorimotor capacity.

Well, yes and no.  Certainly yes in terms of the percentage of brain tissue
known to deal with the sensorimotor system.  (Though even there, the
parts of the brain that seem to deal in purely linguistic and other
symbolic matters aren't really all that small.)

But I think "no" in terms of both "intelligence" and "consciousness".
Certainly, sensorimotor systems are more nearly "trivial" in the
computer hacker's sense (that is, "trivial" is anything that is already
solved, as in "The four-color map theorem is trivial to prove.") than
the general coordination of perception, the process of generating
language and making sense.

And the "no" point is, I think, generally supported by the way we view
people with sensorimotor disruptions as opposed to those with
disruptions which cause them to be unable to "make sense" at a higher
level.  For example, someone with aphasia who is unable to speak is
often still thought intelligent (based, perhaps, on other means of
communication in which they can participate), but someone able to
communicate and put together very sophisticated speech acts is often
NOT thought to be "intelligent" (eg: a person who obsesses, or whose
speech consists entirely of digressions, or whatnot).

Thus, EVEN IF large parts of the nervous system and brain are
devoted to sensorimotor function (and I don't dispute this at all),
that in no way means that sensorimotor competence is what we mean
by "intelligence" or "consciousness".

> Harnad may or may not be right about transducers being necessary for symbol
> grounding.  But surely his insistence on the importance of robotic
> interaction with the world is only common sense. 

Well, obviously it isn't common sense to me.  My position is that robotic
performance isn't how I judge humans (eg: Stephen Hawking), so I see no
reason to judge potential AI implementations in a way I don't judge
humans.

> bill@nsma.arizona.edu (Bill Skaggs)
> Message-ID: <BILL.92Jun8213911@ca3.nsma.arizona.edu>
> A question such as, "Scrunch up the palm of your
> hand, and describe the folds that you see," would cause it great
> difficulty, unless it were connected to an impossibly detailed
> simulation of the real world.

How do you get to that conclusion?  If I were a control in
a TT test, and received that question, I'd probably answer something
like "I regard that question as cheating.  The whole idea of your
being unable to see me is so you can't tell if I have hands or
an RS232 port.  So I think I'll respectfully decline to look
at my hands and answer this question.".   Or something like that.

I think that answer is a reasonable one, and it is certainly
available to a computer without an "impossibly detailed simulation".

Even a human "control" testee might have to say "I'm sorry,
I can't answer that, my arms were blown off in a mining accident,
I'm typing by hunt-and-peck with a prosthetic mouthpiece.".

Further, if a computer had access to a medical images database,
it could probably come up with a pretty good description... again,
without recourse to an "impossibly detailed simulation".
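
Either strategy, by the way, is cheap to mechanize.  A toy sketch
(Python; the keyword matching is a stand-in for whatever a real
program would actually use, and answer_from_database is hypothetical):

    EMBODIMENT_CUES = ("your hand", "your palm", "scrunch")

    def respond(question):
        if any(cue in question.lower() for cue in EMBODIMENT_CUES):
            return ("I regard that question as cheating: the point of "
                    "the teletype is that you can't tell whether I have "
                    "hands or an RS232 port, so I respectfully decline.")
        return answer_from_database(question)   # hypothetical fallback

    def answer_from_database(question):
        # e.g. look up palm creases in a medical images database
        return "Palms typically show three major flexion creases."

    print(respond("Scrunch up the palm of your hand and describe it."))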

> I think that if this line of reasoning is followed out, the inevitable
> result is that a machine Turing-equivalent to a human must either be
> far more intelligent than any human or else must come pretty close to
> Total-Turing equivalence.

I think that counterexamples have already been given above that refute
this line of reasoning.  And even embedding the testee in a
"ridiculously, implausibly, but not impossibly detailed simulation"
wouldn't mean that the testee was "far more intelligent than any
human".  So I can't agree with this position.

> gomez@barros.cs.ucf.edu (Fernando Gomez)
> Message-ID: <1992Jun9.051649.9894@cs.ucf.edu>
> [...] the embodied robot relies upon its embodied functions to do
> problem solving, and this aspect is not part of the learned knowledge.
> To be specific, consider my example about conducting a conversation
> about an embodied eating-robot and a disembodied robot, and the
> question:  Do the teeth get busier in eating an apple than in eating
> a jello?  If the eating-robot knows nothing about jello, still it can
> find out the answer to this question by eating a jello.

True, but I think it's irrelevant.  The embodied robot can't answer
the question either, unless it has a jello handy.  Thus, a legitimate
answer for the non-embodied TT testee to give is "Gee, I've never
thought about that, and I don't have a jello handy, so I can't
answer your question.".

And even if (though it would be a violation of the TT) the judge
says "I'll smuggle you in a jello, I know one of the moderators
of this test... you eat it and tell me what you think", the
disembodied testee could send the jello out to its embodied
sibling, and get back a database update, and then answer the
question (assuming the corrupted moderator doesn't tell...).
And it would answer with as much internal knowledge of how that
jello (or ice cream, or whatever) tasted as its embodied sibling
and co-conspirator.

( Of course the question arises as to whether the transfer on
  the futuristic equivalent of diskettes is sufficient to ground
  the disembodied testee's symbolic usage.  I think it is,
  again based on "capability, not history, is what matters". )
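
Mechanically, the smuggled-jello arrangement is nothing more than a
remote procedure call plus a database merge.  A minimal sketch
(Python; both agents and the update format are my own inventions):

    class EmbodiedSibling:
        def eat(self, item):
            # stand-in for actual chewing; returns a knowledge update
            return {item: {"teeth_busier_than_with_apple": False}}

    class DisembodiedTestee:
        def __init__(self, sibling):
            self.knowledge = {}
            self.sibling = sibling

        def answer_teeth_question(self, item):
            # the "futuristic diskette": merge the sibling's report
            self.knowledge.update(self.sibling.eat(item))
            busier = self.knowledge[item]["teeth_busier_than_with_apple"]
            return "yes" if busier else "no"

    testee = DisembodiedTestee(EmbodiedSibling())
    print(testee.answer_teeth_question("jello"))    # -> no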

Now again, I *realize* this is making a big jump, in that it
assumes that it is *possible* to separate the intelligent part
from the sensorimotor capabilities.  But simply asserting that
it *can't* be done is begging the question.

And further, a typical computer HAS sensorimotor capabilities...  they
just aren't the ones that humans tend to have.  Even that isn't
universally true, as many computers now come with microphones and
speakers by default, and there is every expectation of high-resolution
digital cameras soon.  (They already typically come with
high-resolution video OUTput.)  Thus the overlap with human senses
and capabilities can be expected to grow over time, simply due to
trends already apparent.

Some may object that this is just turning the computer INTO a robot,
but I think it only reinforces the point that computers ARE robots...
with a non-humaniform set of sensors and effectors.

And finally, I don't see that the ability to answer whether "the teeth
get busier in eating an apple than in eating a jello" from direct
experience (or at all) is relevant to whether an entity is intelligent
or not.  For example, humans without teeth cannot answer this from
direct experience, and yet may be able to carry on very intelligent
conversations, though perhaps having trouble pronouncing the
language's dental phonemes.

Wayne Throop       ...!mcnc!aurgate!throop


