From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert Tue Jun 23 13:20:51 EDT 1992
Article 6274 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: 5-step program to AI
Message-ID: <1992Jun16.213227.31307@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <1992Jun12.192537.32302@mp.cs.niu.edu> <60831@aurs01.UUCP>
Date: Tue, 16 Jun 1992 21:32:27 GMT
Lines: 90

In article <60831@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>> rickert@mp.cs.niu.edu (Neil Rickert)
>>   0:	A rock.  Not much here.
>>   1:	A single celled creature (a protozoan for example).  Not bad
>> 	considering that it has only one cell.
>>   2:	A frog.  A big jump.
>>   3:	A mouse.  Many capabilities of mammals which are not seen in
>> 	reptiles and amphibians.  Evidence of consciousness is
>> 	rather more persuasive for mammals than for a frog.
>>   4:	A chimpanzee.  Very intelligent compared with most mammals,
>> 	but lacking language, and presumably well short of human
>> 	intelligence.
>>   5:	Human intelligence.
>> Now here is where my gross over-simplification comes in.  I would
>> characterize what we can currently achieve with AI as being the
>> incremental intelligence to get from step 4 to step 5.  Where AI
>
>Yes, that's a difference between us alright.  I'd say that what we can
>currently achieve with AI is 0 through 2, and that there's something
>missing beyond that.  Call it "grounding" or "intentionality" or
>whatnot, but... something missing.

  I prefer to avoid terms like "grounding" and "intentionality" since
they are poorly defined.  That is to say, if you claimed to have
implemented "intentionality" in a computer, there would be much argument
about how to test that claim.  However, if you claimed to have
implemented pattern recognition and learning, it would be much easier
to know how to test whether your implementation was up to snuff.
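
  To make that concrete, here is a rough sketch in Python.  The learner
interface and all the names are invented for illustration; take it as a
sketch of the kind of objective test I have in mind, not anybody's
actual system.  Train on clean binary patterns, then score recognition
of noisy variants:

  import random

  def test_learner(learner, patterns, noise=0.1, trials=100):
      # 'learner' is hypothetical: any object with learn() and
      # recognize() methods will do for the purpose of the sketch.
      for label, pattern in patterns.items():
          learner.learn(pattern, label)
      correct = 0
      for _ in range(trials):
          label, pattern = random.choice(list(patterns.items()))
          noisy = [1 - bit if random.random() < noise else bit
                   for bit in pattern]
          if learner.recognize(noisy) == label:
              correct += 1
      return correct / trials    # a score we can argue about objectively

  There is no analogous pass/fail harness anyone could write for
"intentionality".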

  I would argue that a frog already has adequate "grounding", although
I won't claim "intentionality" since that is too tied up with a poorly
defined "consciousness".  If Pavlov's dog can salivate at the ring of
a bell, I would consider that strong evidence that the bell sound is a
well grounded symbol for that dog.  I think you can get good evidence
of grounding for lab rats from various maze learning experiments.  I
suspect you could establish the same sort of grounding, albeit weaker,
in frogs.
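
  To pin down what I mean, consider a toy simulation of the
conditioning itself.  The Rescorla-Wagner model is my own choice for
illustration (nothing in this thread depends on it), but it shows how
a bell could come to predict food through nothing more than
error-driven association:

  def condition(trials=50, alpha=0.3, food=1.0):
      # alpha is a made-up learning rate; food is the strength of
      # the unconditioned stimulus.
      v_bell = 0.0                 # associative strength of the bell
      for _ in range(trials):
          # Each trial pairs bell with food; the prediction error
          # (food minus what the bell already predicts) drives learning.
          v_bell += alpha * (food - v_bell)
      return v_bell                # approaches 1.0: bell predicts food

  print(condition())               # essentially 1.0 after 50 pairings

  When the bell reliably predicts food for the system itself, that is
the sense in which I call the bell sound well grounded.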

>I think that the apparent facility with "higher reasoning" that
>computers currently have IS essentially fiddling with "meaningless
>squiggles and squoggles", and any meaning is only read in by humans
>(or at least, to a very large degree).

  This is why I consider computer intelligence to be approximately the
increment from the chimpanzee to the human.  The most obvious difference
between the chimp and humans is language, and the resultant large
extensible set of symbols.  Computers clearly provide the symbols.  It
is true that the computer symbols may be largely "meaningless squiggles
and squoggles" but I believe that the ability to associate semantics
with symbols is already present in the chimpanzee.  What the chimp lacks
is a large extensible symbol set.
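
  The division of labor can be put crudely in code.  This sketch (all
names hypothetical, and only meant to mark the distinction) separates
the cheap part, minting new symbols, from the hard part, tying symbols
to learned experience:

  class SymbolTable:
      def __init__(self):
          self.meanings = {}       # symbol -> associated percepts

      def mint(self, symbol):
          # Creating a new symbol costs nothing.  This extensible
          # symbol set is the part computers clearly provide.
          self.meanings.setdefault(symbol, set())

      def associate(self, symbol, percept):
          # Grounding: tying the symbol to something learned from
          # experience.  This is the part I claim the chimp already has.
          self.mint(symbol)
          self.meanings[symbol].add(percept)

  table = SymbolTable()
  table.associate("bell", "auditory-pattern-17")   # a grounded symbol
  table.mint("qzx")                # a meaningless squiggle, so far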

>> Once we understand rapid pattern recognition and learning, I think we
>> will be ready to make dramatic advances.
>
>I agree, with two reservations: I think the problem of pattern
>recognition is starting to yield, while the problem of learning isn't
>yet (or at least not flexibly and open-ended-ly enough).

  I'm not so sure that pattern recognition is starting to yield.
Recognition of predetermined patterns won't do the job.  We need a
pattern recognition system that is coupled to the learning, and that
can recognize newly learnt patterns.
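
  Here is the sort of coupling I mean, as a toy sketch.  It is a
leader-style clusterer of my own invention for this post (thresholds
and names are made up): it recognizes a pattern near a stored
prototype, and when nothing matches it stores the novel pattern, so
that recognition and learning are one mechanism rather than two:

  def hamming(a, b):
      # Distance between two equal-length bit patterns.
      return sum(x != y for x, y in zip(a, b))

  class OnlineRecognizer:
      def __init__(self, threshold=2):
          self.prototypes = []     # learned patterns, grown on the fly
          self.threshold = threshold

      def present(self, pattern):
          # Recognize a previously learnt pattern if one is close...
          for i, proto in enumerate(self.prototypes):
              if hamming(pattern, proto) <= self.threshold:
                  return i
          # ...otherwise learn it, so it is recognizable from now on.
          self.prototypes.append(list(pattern))
          return len(self.prototypes) - 1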

>                                                         And I think
>there may well be something still more involved.

  Well, yes.  Thought is also required.  It is presumably present already
in chimpanzees.  If frogs are capable of thought, it is only thought of
a most rudimentary kind.  Personally I'm not too concerned about thought.
I don't see it as difficult to implement once the pattern recognition
and learning are in place.  In some ways our computer-based AI goes well
beyond the increment from chimpanzee to human, and I believe there is
enough there to accommodate thought.

>> I see in the frog at least the beginnings of what is missing.

>I agree that frogs have the starts of recognition and learning, but
>I don't see that they really do it *that* much "better" than current
>computers.

  If we could do recognition and learning in the same way as frogs, we
would probably only have to increase the resolution to reach the level
needed for humans.  We would already have the algorithms, and would only
need more chips and higher-resolution sensory peripherals.

>            But hey, I'm just talking about vague gut feel here.

  So am I.  But what the heck.
