From newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!usc!cs.utexas.edu!uunet!psinntp!dg-rtp!sheol!throopw Tue Jun 23 13:21:27 EDT 1992
Article 6335 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!usc!cs.utexas.edu!uunet!psinntp!dg-rtp!sheol!throopw
>From: throopw@sheol.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: 5-step program to AI
Keywords: chess
Message-ID: <5027@sheol.UUCP>
Date: 21 Jun 92 04:48:03 GMT
References: <1992Jun20.003223.963@CSD-NewsHost.Stanford.EDU> <4135@rosie.NeXT.COM> <1992Jun20.022757.31828@mp.cs.niu.edu>
Lines: 114

> costello@CS.Stanford.EDU (T Costello)
> Message-ID: <1992Jun20.003223.963@CSD-NewsHost.Stanford.EDU>
>|> To the contrary, human master players evaluate far *DEEPER* than do
>|> computer players (at least, in the mid-game).  
>   Tal was once asked how many moves he looked ahead, he answered "Only
>   one, the right one".

Well, that makes for a good "line", but I'm not sure it's really a good
overview of the process of human chess play, especially if he were probed
for details on how he decides which move is the "right one". 

> I have been looking at heuristic driven reasoning, and I find that
> good examples are difficult to find.  I would be very grateful
> if anyone could explain what they feel the methods or the underlying
> methodology of the human choice of goals to search for is.

What I laid out was from magazine articles, based in turn on interviews
with the workers who created the best computer chess programs of a year
or so ago, and with the human chess players that collaborated with them.
The material on human strategies was based on interviews with human
players, and involved such things as probing what memory strategies
chess masters used to remember board positions by categorizing the types
of errors they tend to make (on the rare occasions when they made any...). 

Unfortunately, I don't have references to those magazine articles to
hand.  Looking into Scientific American and Science News for the last
year or two might find at least some of them (but not all, since I know
some of my reading was from other sources that I can't recall now.)

> paulking@neuron.next.com (Paul King)
> Message-ID: <4135@rosie.NeXT.COM>
> The human, I speculate, perceives "moves" and "board positions" at
> many levels.  At the lowest level, a move is the placement of
> a piece in a board full of pieces.  But at a higher level, the
> human perceives imagined structures, such as controlled diagonals,
> triply-guarded squares, and pieces under attack.  At a
> still higher level, the human perceives sturdy defense structures,
> areas of vulnerability, risky pieces, and fork opportunities.

Well, computer programs are also aware of controlled diagonals,
multiply attacked squares, pins, forks, positional advantage, and
all manner of "high level" ways of looking at the board.  Computer
chess programs are far, far beyond their origins of examining positions
purely by "head count": queen is 9 points, rook 5, bishop 3, etc.
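( To make the contrast concrete, here is a toy sketch of that old
  "head count" style of evaluation, written in Python.  The names are my
  own invention, and the values are just the standard 1/3/3/5/9 scale;
  real programs fold in mobility, pawn structure, king safety, and so on. )

```python
# Toy "head count" evaluator: material only, standard 1/3/3/5/9 values.
# Uppercase letters are White pieces, lowercase letters are Black pieces.
PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def material_score(pieces):
    """Positive means White is ahead on material, negative means Black is."""
    score = 0
    for p in pieces:
        value = PIECE_VALUES.get(p.upper(), 0)
        score += value if p.isupper() else -value
    return score
```

A modern program folds dozens of positional terms (pins, open files,
controlled diagonals) into the same scoring function, which is exactly
what lifts it beyond pure head-counting.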

No, the difference between computers and humans is much more subtle than
that.  Perhaps part of the difference is that humans approach this
high-level view "from above", synthetically, while computers approach it
from below, analytically.  At least, that's the difference I see in it. 
But just what it might mean to "approach something from above,
synthetically" is vague. 

> rickert@mp.cs.niu.edu (Neil Rickert)
> Message-ID: <1992Jun20.022757.31828@mp.cs.niu.edu>
> Your meaning of pattern recognition is evidently quite different from
> mine.  I am not referring to well defined patterns that can be checked
> in a point by point comparison.  Anything that is recurrent in some form
> is a pattern, and the brain is remarkably good at discovering these
> recurrent patterns (in some kind of learning process) and recognizing
> them when they occur again.  And I don't mean conscious discovery and
> conscious recognition.  Much of this occurs at unconscious levels.

But look at what was said here. "Recognizing them when they occur again".
They have to *occur* in some sense before they can be *recognized*.

What I'm getting at is that humans are NOT *recognizing* patterns when
they play chess, or do mathematics, or compose music, or speak sensible
language, or paint pictures, or draw scientific hypotheses.  They are
*creating* patterns, *generating* them.  (We'll ignore Socrates and
Plato for now, and presume that it's possible to create and originate,
and not just remember and recognize...)

It's like looking at mug shots.  You have a mug-shot book of a zillion
felons.  A human *could* (given time) look at all possible felons and
recognize the miscreant.  That's how computers look up fingerprints.  A
police artist, on the other hand, *creates* something that looks like
the felon the computer might search for, based on minimal cues.  I'm
saying that the human chess player doesn't have the time to look at the
mug shots of all the reachable positions.  The human instead "sketches
out" a position that has an uncanny resemblance to one of the reachable
positions.  Uncanny, because for the chess player, there's no actual
felon-analog to remember and guide the sketch.  (I'm not saying that
this has a *deep* connection to chess, mathematics, and the like.  Just
an illustrative metaphor.  At least I think that's all it is...)

So, the point is to hit upon something that both satisfies some
complicated syntactic requirement (is reachable in the current
chess game, or is a theorem of some formal system, or whatever), and
at the same time has some interesting property (tactical advantage,
interesting mathematically, or whatever).  It's obvious how computers
can do this.  They only consider syntactically correct positions,
and then grade them according to some "goodness" scale.
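( A sketch of that "syntax first, goodness second" order, on a
  deliberately trivial toy domain of my own: "positions" are integers,
  and the goodness scale is made up. )

```python
def computer_style_search(position, successors, goodness):
    """Enumerate only the syntactically reachable successor positions,
    then grade each one and keep the best -- the analytic, 'from below'
    order of operations."""
    return max(successors(position), key=goodness)

# Toy domain: a position is an integer; legal moves shift it by 1 or 2.
successors = lambda n: [n - 2, n - 1, n + 1, n + 2]
goodness = lambda n: -abs(n - 10)   # made-up scale: best at 10
```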

Humans seem to do it the "wrong" way 'round.  They decide on a
chess position, or sentence in a formal system, that suits them from
a "goodness" standpoint, and then "work backwards" to show that it's
reachable, or provable, or whatever.
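( The same sort of toy domain, run the "wrong" way 'round: propose
  attractive targets first, then work backwards to check reachability.
  Again, all names here are my own. )

```python
def human_style_search(position, targets, reachable):
    """'Goodness first': run down a best-first list of attractive
    target positions and return the first one that checks out as
    actually reachable from here."""
    for target in targets:
        if reachable(position, target):
            return target
    return None

# Toy reachability: a target is reachable if it is at most 2 away.
reachable = lambda pos, tgt: abs(tgt - pos) <= 2
```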

( Genetic and annealing approaches apply where related "syntax-valid"
  positions have a "goodness gradient", so to speak.  Chess, mathematics,
  and other such things are too chaotic for that.  Perhaps humans can
  see a "goodness gradient" in these cases that isn't apparent to
  them consciously.  That is, perhaps humans have some way of
  navigating "towards good" while preserving reachable-position-ness,
  which is essentially how genetic and annealing approaches can 
  get away with "looking at goodness first". )
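( For completeness, the annealing idea sketched in Python on a toy
  integer domain; the parameter values and names are arbitrary choices
  of mine, not anything from an actual chess program. )

```python
import math
import random

def anneal(position, neighbors, goodness, steps=2000, temp=1.0, cooling=0.995):
    """Simulated-annealing sketch: every candidate is drawn from
    neighbors(), so the walk never leaves the valid-position space,
    while on average it drifts 'towards good'.  This only works when
    goodness has a usable gradient over nearby valid positions."""
    current = position
    for _ in range(steps):
        candidate = random.choice(neighbors(current))
        delta = goodness(candidate) - goodness(current)
        # Always take improvements; take setbacks with shrinking probability.
        if delta > 0 or random.random() < math.exp(delta / temp):
            current = candidate
        temp *= cooling
    return current
```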

In any event, I'd still say that "pattern recognition" isn't the right
term for it.  "Recognition" implies going from the concrete to the
abstract.  Going from the abstract to the concrete is what we're
discussing, I think, and that's something more like "pattern generation"
or "pattern creation". 
--
Wayne Throop  ...!mcnc!dg-rtp!sheol!throopw


