From newshub.ccs.yorku.ca!ists!torn!utcsri!rutgers!att!linac!mp.cs.niu.edu!rickert Tue Jun 23 13:21:30 EDT 1992
Article 6340 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn!utcsri!rutgers!att!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Newsgroups: comp.ai.philosophy
Subject: Re: 5-step program to AI
Message-ID: <1992Jun21.172732.10775@mp.cs.niu.edu>
Date: 21 Jun 92 17:27:32 GMT
References: <4135@rosie.NeXT.COM> <1992Jun20.022757.31828@mp.cs.niu.edu> <5027@sheol.UUCP>
Organization: Northern Illinois University
Lines: 109

In article <5027@sheol.UUCP> throopw@sheol.UUCP (Wayne Throop) writes:
>> rickert@mp.cs.niu.edu (Neil Rickert)
>> Message-ID: <1992Jun20.022757.31828@mp.cs.niu.edu>
>> Your meaning of pattern recognition is evidently quite different from
>> mine.  I am not referring to well defined patterns that can be checked
>> in a point by point comparison.  Anything that is recurrent in some form
>> is a pattern, and the brain is remarkably good at discovering these
>> recurrent patterns (in some kind of learning process) and recognizing
>> them when they occur again.  And I don't mean conscious discovery and
>> conscious recognition.  Much of this occurs at unconscious levels.
>
>But look at what was said here. "Recognizing them when they occur again".
>They have to *occur* in some sense before they can be *recognized*.

  Absolutely.  Agreement on one point.  (Regrettably, that may be our only
point of agreement.)

>What I'm getting at is that humans are NOT *recognizing* patterns when
>they play chess, or do mathematics, or compose music, or speak sensible
>language, or paint pictures, or draw scientific hypotheses.  They are
>*creating* patterns, *generating* them.  (We'll ignore Socrates and

  When a mathematician proves an important theorem, he is indeed
creating patterns.  For this theorem will be printed in journals and
textbooks, and written on blackboards.  He is creating a pattern of
printed words.  But this is a rather uninteresting sense of pattern
creation, and I doubt that it is what you meant.

  When a mathematician comes up with a new result, where do you think it
comes from?  Does he just wave a magic wand and *poof* suddenly there it
is?  No, the mathematics comes from something he has recognized, perhaps
an unanticipated association between two different mathematical objects.
His theorem documents the pattern he has recognized, and demonstrates that
it is real and not merely imagined.  Most mathematicians prefer to call
their work a "discovery" rather than an "invention" because they are aware
that they have recognized patterns that were already in existence.

>It's like looking at mug shots.  You have a mug-shot book of a zillion
>felons.  A human *could* (given time) look at all possible felons and
>recognize the miscreant.  That's how computers look up fingerprints.

  Let me repeat.  I am *not* referring to well defined patterns that can
be checked in a point by point comparison.  I refer to that as "pattern
matching".  The computer check of fingerprints is pattern matching.  I
am talking about pattern recognition, and pattern matching is only one
way - usually the wrong way - of doing pattern recognition.

  Let me give an example.  Suppose I want to write a spell checking program.
One way of doing it would be to use pattern matching.  I could read in the
word, then compare it with 100,000 words of a dictionary to see if it
matched.  This is somewhat expensive, but if binary search methods are used
the cost can be tolerable.

  OR  I can create a finite state machine which accepts just the words of
the dictionary.  As I read in the word, I run it through the finite state
machine.  If the word is 7 letters long, it takes only 7 steps to check the
spelling.  I recognize it, but I am not doing any pattern matching.
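
  To make the contrast concrete, here is a minimal sketch (mine, not from
the original post; the five-word dictionary is made up).  The first
function does pattern matching by binary search against the word list;
the second pair builds a crude acceptor (a trie, one flavor of finite
state machine) and checks a word in one state transition per letter:

```python
import bisect

WORDS = sorted(["cat", "car", "card", "care", "dog"])

def match_binary_search(word):
    # Pattern matching: compare the word against the sorted dictionary,
    # O(log n) comparisons per lookup.
    i = bisect.bisect_left(WORDS, word)
    return i < len(WORDS) and WORDS[i] == word

def build_acceptor(words):
    # Build a trie whose states are dicts; the machine accepts exactly
    # the words of the dictionary.
    root = {}
    for w in words:
        node = root
        for ch in w:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker
    return root

def accepts(machine, word):
    # One state transition per letter: a 7-letter word takes 7 steps,
    # regardless of how large the dictionary is.
    node = machine
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node

machine = build_acceptor(WORDS)
```

The acceptor recognizes the word without ever comparing it against any
stored word, which is the distinction I am drawing between recognition
and matching.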

  Much of our thinking about human intelligence is muddied by our building
of models which look too much like the way we would do things in a
computer.  Thus we too readily think of pattern matching as if it were the
only way to do recognition.  This leads to many fallacious ideas.  It
leads to people counting how many steps the pattern matching would take
in a computer, multiplying this by the reaction time of neurons, and
attempting to deduce the degree of parallelism in the brain.  But it is
all based on a probably false model.  The way we do things in the computer
is far too organized to have evolved incrementally in a
biological system.  However, something like a finite state machine, but
cruder, could easily have evolved, with evolution splitting states and
increasing the complexity of the type of state transitions.

>                                                                      A
>police artist, on the other hand, *creates* something that looks like
>the felon the computer might search for, based on minimal cues.

  Imagine you have something with some of the characteristics of the
finite state machine.  Specifically, something that can recognize
patterns, but does not use pattern matching, and does not have a library
of known patterns.  The ability to recognize is a memory of sorts.  But
it is not a memory you can directly read from.  How do you recover
information from such a memory?  What you do is create something and see
if the recognition machine can recognize it.  Then you keep modifying
your creation with the goal of improving the quality of recognition.
When you have finished, you can hope that you have produced a good
likeness.

  This is how you must recover information from a memory whose only ability
is the ability to recognize.  This is what the police artist does.  It is
also what you do when you are trying to remember something.  Suppose you
are unsure of the spelling of a word.  You might talk about searching your
memory, but in reality you try out a few guesses, write them down either
on paper or in your imagination, and see which guess you best recognize.
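
  A toy sketch of that recovery loop (all names and the scoring rule are
my own hypothetical stand-ins, not a model of the brain): generate a few
candidate spellings, score each with the recognizer, and keep whichever
it recognizes best.

```python
def recognizer(guess, target="necessary"):
    # Stand-in for the opaque memory: it cannot be read directly, it can
    # only report how strongly a guess is "recognized".  Here the score
    # is simple character-position agreement, penalized by length mismatch.
    return sum(a == b for a, b in zip(guess, target)) - abs(len(guess) - len(target))

def recall_by_recognition(candidates, recognize):
    # Try out a few guesses "on paper" and keep the one you best recognize.
    return max(candidates, key=recognize)

guesses = ["neccessary", "necessary", "nesessary", "necesary"]
best = recall_by_recognition(guesses, recognizer)
```

In a fuller version the candidates would themselves be modified
iteratively, guided by the recognition score, which is the police-artist
procedure described above.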

>                                                                 I'm
>saying that the human chess player doesn't have the time to look at the
>mug shots of all the reachable positions.

  You are arguing with the fallacious reasoning I have just described,
based on the wrong assumptions about how pattern recognition occurs.

>Humans seem to do it the "wrong" way 'round.  They decide on a
>chess position, or sentence in a formal system, that suits them from
>a "goodness" standpoint, and then "work backwards" to show that it's
>reachable, or provable, or whatever.

  If you had a computer which could do recognitions, but held no stored
images, you would have to program it to simulate positions and attempt
to recognize them.  You would find that you were programming it with much
the same "wrong way" approach.


