From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!utcsri!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert Tue Jun 23 13:21:07 EDT 1992
Article 6302 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!utcsri!rpi!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: 5-step program to AI
Message-ID: <1992Jun18.022002.29912@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <60835@aurs01.UUCP> <1992Jun17.181322.7736@mp.cs.niu.edu> <60840@aurs01.UUCP>
Date: Thu, 18 Jun 1992 02:20:02 GMT
Lines: 59

In article <60840@aurs01.UUCP> throop@aurs01.UUCP (Wayne Throop) writes:
>> rickert@mp.cs.niu.edu (Neil Rickert)
>> In computers we string together sequences of atomic items from a discrete
>> set, typically either ASCII characters or binary digits (depending on what
>> level you want to view it).  Humans string together sequences of phonemes
>> from their language, which are also atomic items from a discrete set.  Is
>> there all that much difference?
>
>I think there is, primarily in the mechanisms which form the
>composition rules, and hence in the classes of sequences that tend to
>get put together.

 When you look at the composition rules, you are probably making the
wrong comparisons.  It is true that in computers we can build "sentences"
every which way, while linguistic symbols and sentences have a great
deal of constraint (the regularities of language).

 But the comparison is unfair.

 The computer symbols you are thinking of are those used within the
computer, or between the computer and nearby peripherals.  The computer
bus, the SCSI bus, etc, are high quality communication channels.  Errors
are infrequent.

 Try comparing with computer signals sent over noisy
communication channels.  Think of the digital pictures sent back from
Jupiter and Saturn by space probes.  Here you find severely
constrained composition rules.  A great deal of redundancy is built
into the signals in the form of error correcting codes.
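
 To see how tight such constraints can be, consider Hamming's classic
(7,4) code (a toy sketch of my own -- the actual deep-space coding is
far more elaborate): four data bits are padded out to seven, and the
three redundant bits let the receiver locate and repair any single
corrupted bit.

```python
# Hamming(7,4): encode 4 data bits into 7.  The redundancy constrains
# which 7-bit sequences are legal, and any single-bit error in
# transmission can be located and corrected.

def hamming_encode(d):
    """d: list of 4 data bits -> 7-bit codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_decode(c):
    """c: 7-bit received word -> corrected 4 data bits."""
    c = c[:]  # don't mutate the caller's list
    # recompute the parity checks; the syndrome names the flipped position
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 0 means no error detected
    if syndrome:
        c[syndrome - 1] ^= 1          # flip the erroneous bit back
    return [c[2], c[4], c[5], c[6]]

data = [1, 0, 1, 1]
word = hamming_encode(data)
word[4] ^= 1                          # corrupt one bit in transit
assert hamming_decode(word) == data   # recovered exactly
```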

 Now look at language as a digital signal sent over a noisy channel.
The regularities of language suddenly look like the redundant information
in error correcting codes, except that they are not as mathematically
formalized as our digital error correcting codes.

 The central property of digital communications is that a signal can
be significantly degraded, but as long as it is not degraded past the
point where it can be recognized, it can be accurately recreated.  It
is this resistance to signal degradation that is so important.  And
language has the same property.  Once information is linguistically
encoded, it becomes much more resistant to degradation.
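
 A crude illustration of that threshold property (a triple-redundancy
toy of my own, not a serious code): each bit is transmitted three
times, and a majority vote recreates the original exactly, so long as
no bit loses two of its three copies.

```python
# Majority-vote repetition code: the crudest digital scheme going,
# but it shows the threshold property.  Each bit is sent three times;
# as long as at most one of the three copies of each bit is degraded,
# the original message is recreated exactly.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    # majority-vote each group of three copies
    return [int(sum(received[i:i + 3]) >= 2)
            for i in range(0, len(received), 3)]

message = [1, 0, 0, 1, 1, 0]
signal = encode(message)
signal[1] ^= 1    # degrade one copy of the first bit
signal[9] ^= 1    # ... and one copy of the fourth
assert decode(signal) == message   # degraded, but recreated exactly
```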

>The more important difference is the mechanisms by which current
>computers and humans employ and manipulate abstractions seem likely to
>be very different indeed.  As different as the differences between the
>mechanisms behind Deep Thought or Belle's sequences of chess moves, and
>a human's sequences of chess moves, even if they are similar
>sequences.

 This brings us right back to the importance of pattern recognition.  The
computer chess program and the human chess player both proceed in a
somewhat similar manner.  They construct possible sequences of continuation
moves and evaluate the result.  But there the similarity ends.  The
computer evaluation is essentially syntactic on the symbols, while the
human player looks at the information represented by the symbols (the
game position) and uses his pattern recognition skills to evaluate the
result.  But the difference seems to lie largely in the far greater
ability of humans at pattern recognition and learning.
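
 For concreteness, here is roughly what the syntactic part looks like,
boiled down to a toy negamax search (the game tree and scores are
invented for illustration, not Deep Thought's or Belle's): the program
backs values up the tree by pure arithmetic on the symbols, with no
idea what positions they denote.

```python
# A bare-bones negamax search of the kind chess programs use: build
# out continuation sequences, evaluate the leaves, and back the values
# up.  The "evaluation" is purely syntactic -- arithmetic on a dict of
# made-up position scores, with no notion of what the positions mean.

# toy game tree: each position maps to its continuations;
# leaves map to a static score from the mover's point of view
TREE = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
SCORES = {"a1": 3, "a2": -2, "b1": 5, "b2": -8}

def negamax(pos):
    if pos in SCORES:                  # leaf: static evaluation
        return SCORES[pos]
    # best reply, negated: what is good for the opponent is bad for us
    return max(-negamax(child) for child in TREE[pos])

assert negamax("start") == -2          # value for the player to move
```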
