From newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!elroy.jpl.nasa.gov!decwrl!mcnc!aurs01!throop Tue Jun 23 13:21:08 EDT 1992
Article 6304 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!sdd.hp.com!elroy.jpl.nasa.gov!decwrl!mcnc!aurs01!throop
>From: throop@aurs01.UUCP (Wayne Throop)
Newsgroups: comp.ai.philosophy
Subject: Re: 5-step program to AI
Message-ID: <60842@aurs01.UUCP>
Date: 18 Jun 92 15:10:14 GMT
References: <60835@aurs01.UUCP> <1992Jun17.181322.7736@mp.cs.niu.edu> <60840@aurs01.UUCP> <1992Jun18.022002.29912@mp.cs.niu.edu>
Sender: news@aurs01.UUCP
Lines: 80

> rickert@mp.cs.niu.edu (Neil Rickert)
> The computer symbols you are thinking of are those used within the
> computer, or between the computer and nearby peripherals. 

Well... no, actually.  I was thinking about (say) strings of ASCII
characters that are emitted from computer peripherals for humans to
interpret.  I'm pretty sure we were talking about the same thing.

> The central property of digital communications is that a signal can
> be significantly degraded, but as long as it is not degraded past the
> point where it can be recognized, it can be accurately recreated.  It
> is this resistance to signal degradation that is so important.

Yes, but that wasn't what I was getting at.  The notion I was
responding to was that since the difference between humans and other
mammals is use of abstract symbols, and since computers already use
abstract symbols, that therefore going from  general-mammal-level of
capability to human-level of capability would be easy because it's
essentially already there.

I think that may well be wrong, because "use abstract symbols" may
mean very different things when applied to humans than when applied
to computers.  In fact, I tend to think that the two *are* very
different things, and it *won't* be easy to go from general-mammal
to human capabilities.

> This brings us right back to the importance of pattern recognition.  The
> computer chess program and the human chess player both proceed in a
> somewhat similar manner.  They construct possible sequences of continuation
> moves and evaluate the result. 

I think this is incorrect.  The evidence I've seen is that computers
generate zillions of continuations and evaluate each result.  Humans,
on the other hand, generate a small handful of continuation moves,
this set *happens* *to* *contain* the "right" moves, and they then
spend most of their time deciding between them by detailed comparison
(not by pattern recognition).  In some as-yet-unknown way, the good
moves were constructed, rather than being recognized among the bad
moves.

Human pattern-recognition cycle times are tens, maybe hundreds of
trials per second.  There just isn't time to fit tree searches of the
depth humans actually achieve into the time they actually take by
recognizing "right" moves among the zillions possible.  Thus, pattern
recognition in the usual sense can't really be involved here either
(though it may be related in some way, e.g., pattern recognition
schemata used in some sort of generative process).
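
Some back-of-envelope arithmetic makes the point.  Take a branching
factor of roughly 35 legal moves per chess position; the depth and
recognition-rate figures below are rough guesses on my part, chosen
generously in favor of the recognition story:

```python
# Full-width search size vs. a generous human pattern-recognition rate.
# All three constants are rough assumptions, not measured values.

BRANCHING = 35               # typical legal moves per chess position
DEPTH = 8                    # plies a strong human routinely looks ahead
RECOGNITIONS_PER_SEC = 100   # generous human recognition rate

positions = BRANCHING ** DEPTH          # nodes in a full-width tree
seconds_needed = positions / RECOGNITIONS_PER_SEC
years_needed = seconds_needed / (365 * 24 * 3600)
print(f"{positions:.2e} positions; {years_needed:.0f} years to scan them")
```

Even at a hundred recognitions per second, scanning the full tree takes
centuries, so whatever humans do in a few minutes per move, it isn't
recognizing the right move among all the possibilities.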

It is still possible that some parallelism is being exploited, and
humans consider all-moves-at-once and recognize where the good ones are
in some "move-space".  But even here, at the lookahead levels humans
use, the move-space is too large to be processed in parallel, being (I
think... sanity check me here) many, many times as large as the visual
field, and with more complex elements to boot.  Further, IF humans were
doing that, you'd expect them to do splendidly at end-games (would you
not?)... and they do not. Humans are strongest in the mid-game (if I'm
remembering correctly).

Anyway, labeling what's going on in the human case as anything like
"pattern recognition" is very, very premature, at best.

> The computer evaluation is essentially syntactic on the symbols, 
> while the human player looks at the information represented by the symbols
> (the game position) and uses his pattern recognition skills to evaluate the
> result.  But the difference seems to be largely in the great ability of
> humans at pattern recognition and learning.

Well, in addition to disagreeing that the important difference lies in
the evaluation strategy at all, I also think it is a mistake to
characterize the computer's evaluation strategy as "syntactic on the
symbols".  Positional elements DO play a role in computer evaluations.

Now (oversimplifying, granted) syntax can give "legal"/"not-legal"
indications for game positions.  But giving a "goodness" score, whether
linear or multidimensional, requires more than that.  In fact, it
requires what in computer circles is termed "semantics" (though I agree
that this semantics is "merely" (and again oversimplifying) syntax on a
different, lower level).
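
To see the difference, consider what even a toy evaluation function has
to do.  The weights and position encoding below are invented stand-ins
(real programs use far richer terms), but the shape is the point:
material plus positional elements like mobility and pawn structure, a
graded "goodness" score rather than a legal/not-legal judgment:

```python
# Toy chess evaluation: material plus positional terms.  Piece values,
# weights, and the position representation are all invented for this sketch.

MATERIAL = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9}

def evaluate(position):
    """Score a position from White's point of view (positive favors White)."""
    material = sum(MATERIAL[p] * n for p, n in position["white"].items())
    material -= sum(MATERIAL[p] * n for p, n in position["black"].items())
    mobility = 0.1 * (position["white_moves"] - position["black_moves"])
    doubled = -0.5 * position["white_doubled_pawns"]
    return material + mobility + doubled

pos = {"white": {"P": 8, "N": 2, "B": 2, "R": 2, "Q": 1},
       "black": {"P": 8, "N": 2, "B": 1, "R": 2, "Q": 1},
       "white_moves": 30, "black_moves": 25,
       "white_doubled_pawns": 1}
print(evaluate(pos))   # bishop advantage plus small positional terms
```

Nothing in the rules of chess (the "syntax") tells you a bishop is worth
about three pawns or that doubled pawns hurt; those terms encode facts
about the game, which is exactly the lower-level "semantics" at issue.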

Wayne Throop       ...!mcnc!aurgate!throop


