From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!decwrl!mcnc!aurs01!throop Mon Jun 15 16:04:42 EDT 1992
Article 6210 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!decwrl!mcnc!aurs01!throop
From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: Transducers
Message-ID: <60806@aurs01.UUCP>
Date: 11 Jun 92 17:27:04 GMT
References: <BILL.92Jun10174436@ca3.nsma.arizona.edu>
Sender: news@aurs01.UUCP
Lines: 62

> bill@nsma.arizona.edu (Bill Skaggs)
> Time to recapitulate.  I'm willing to give credit for "thinking" to
> any machine that can carry on an intelligent general-purpose
> conversation in a natural language like English, and this I think was
> the criterion Turing was aiming at.  But thus stated it is a vague
> criterion:  what is an "intelligent general-purpose conversation"?
> The Imitation Game was an attempt to make the criterion precise, but
> for practical purposes the Game doesn't work very well, because it can
> easily be won by the human contestant using the sort of trickery I
> have described.

Agreed.  (Lest somebody be misled by the fact that I have said
that the TT is well specified, I mean only in relation to the TTT.
That is, near as I can tell, the TT is better specified than the TTT.)

> I also believe that, as a practical matter, it will be extremely
> difficult to teach a machine to converse intelligently without giving
> it rich sensory inputs and the ability to manipulate physical objects.
> (I take no credit for this view: Turing said the same thing in 1948,
> and many others have repeated it since.)  Note that this is only an
> opinion, and one that many very clever AI workers would disagree with.
> Doug Lenat, for example (but there are many others), has for years
> been aiming at building intelligence by feeding the entire contents of
> an encyclopedia to a computer.

And here too, I'd side with Bill.  We may well differ in what we
consider "manipulating objects" and "rich sensory input", but I think
it unlikely that simply "feeding the entire contents of an encyclopedia
to a computer" will produce AI, no matter how sophisticated the
cross-referencing and indexing.

Further, even Turing's approach of "giving a computer the best sensors
money can buy and having it learn" (quote from memory) is essentially
impossible, because we don't know how humans learn.  (We know quite a
bit about ways to try to get computers to learn that don't work well
enough...)

From my own perspective, "AI" is currently able to produce things with
"intelligence" somewhere between that of frogs and birds.  That is,
something with some sophisticated and effective pattern recognition,
and some sophisticated and (somewhat less) effective motor skills, and
a set of triggers of sets of motor activities based on patterns
recognized.  This applies equally to robotic research (optical pattern
recognition, locomotor skills in mono/bi/quadru-ped robots, etc) and
computer efforts (natural language parsing/translating/indexing
schemes, expert systems, etc).  The best anybody's been able to do so
far is akin to a frog flicking an insect out of the air, or a bird
rolling an egg back onto the nest.  The frog doesn't "know" diddly
about insects, nor even the bird about eggs, in pursuing these
behaviors.  Even a respectable (say) cat-level of intelligence is (I
think) beyond us now.
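
The pattern-trigger architecture described above can be sketched in a few
lines.  This is my own illustration, not anything from the post: all the
function names, pattern labels, and motor routines below are invented for
the example.  The point it shows is that the agent has no model of what an
insect or an egg *is*; it just maps recognized stimulus patterns to canned
motor routines.

```python
def recognize(stimulus):
    """Crude pattern recognizer: classify a stimulus dict into a label."""
    if stimulus.get("small") and stimulus.get("moving"):
        return "small-moving-object"        # a frog would "see" a fly here
    if stimulus.get("round") and stimulus.get("outside_nest"):
        return "round-object-outside-nest"  # a bird would "see" a stray egg
    return None

# Fixed trigger table: recognized pattern -> set of motor activities.
TRIGGERS = {
    "small-moving-object":       ["orient", "flick_tongue"],
    "round-object-outside-nest": ["extend_neck", "roll_toward_nest"],
}

def react(stimulus):
    """Pure stimulus-response: no knowledge of insects or eggs involved."""
    pattern = recognize(stimulus)
    return TRIGGERS.get(pattern, [])

print(react({"small": True, "moving": True}))        # ['orient', 'flick_tongue']
print(react({"round": True, "outside_nest": True}))  # ['extend_neck', 'roll_toward_nest']
print(react({"large": True}))                        # []
```

Scaling this table up adds behaviors but never adds "knowing", which is
roughly the point being made about frog- and bird-level intelligence.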

At least one basic breakthrough will (in my opinion) occur before AI is
possible (if it ever is).  I very much doubt that it's simply a matter
of scaling up the techniques currently understood.  Scaling up
currently known techniques could doubtless give interesting and useful
results, but not, I think, anything I'd call true AI.  (I may be
wrong of course.... a very, very, very large and fast neural net,
or expert system, or hybrid system, or whatever, could be "all" it 
takes.  I just don't currently think so.)

Wayne Throop       ...!mcnc!aurgate!throop


