From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!pacbell.com!att!att!fang!tarpit!cs.ucf.edu!news Thu Apr 16 11:33:21 EDT 1992
Article 4976 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!mips!pacbell.com!att!att!fang!tarpit!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Subject: Re: intelligent organic goop
Message-ID: <1992Apr6.145603.22619@cs.ucf.edu>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
References: <505@tdatirv.UUCP>
Date: Mon, 6 Apr 1992 14:56:03 GMT

In article <505@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
| In article <1992Apr01.210117.26523@spss.com> markrose@spss.com (Mark
| Rosenfelder) writes:
| |>Were the AI implemented entirely in software, then the heap or the
| |>device drivers would be unidentifiable.
| |>
| |>If such parts are identifiable, then I suspect, the system could not be
| |>visibly intelligent.
| |
| |It sounds like you're saying that you can't call a system intelligent if you
| |can figure out how it works.  Sounds barmy to me.  If we knew exactly how
| |the brain worked, would it cease to be intelligent?
| 
| He is wrong in another respect also.  Even at our current, rather minimal,
| level of knowledge about the brain and the mind we have found identifiable
| parts thereof.  And the more we know about the brain and mind, the more
| identifiable parts we find.

Consider weather as a rough analogy.  We know how it works (Navier-Stokes
equations + thermodynamics + boundary conditions of solar input ...).  Fifteen
or so years ago the phenomenon of chaos became widely known.  Applied to
weather, the upshot seems to be that conditions cannot be predicted more than
about two weeks in advance.  The exponential growth characteristic of chaotic
dynamics amplifies the butterfly's ruffle into significant weather changes.
Long-term regularities can still emerge - the geometry of the attractor - in
the form of seasonal and climatic patterns, but week-to-week weather is
unpredictable.  To be precise, the weather is predictable in principle, but
not in practice in the physical world.
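The exponential divergence described above is easy to demonstrate with a toy
chaotic system.  A minimal sketch in Python, using the logistic map at r = 4
(a standard fully chaotic example; the specific map and parameters are my
illustration, not anything from the article): two states that start a
billionth apart track each other for a few steps, then separate completely.

```python
# Sensitive dependence on initial conditions, sketched with the
# logistic map x_{n+1} = r * x * (1 - x) at r = 4.0 (chaotic regime).
# This is an illustrative toy system, not a weather model.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.3, 0.3 + 1e-10   # two "weather states" differing by a butterfly
gaps = []
for n in range(60):
    a, b = logistic(a), logistic(b)
    gaps.append(abs(a - b))

# Short term: the trajectories stay close, so prediction works.
# Long term: the gap has grown by many orders of magnitude and
# the two futures are unrelated.
print("gap after 5 steps:", gaps[4])
print("largest gap seen: ", max(gaps))
```

Short-range prediction succeeds and long-range prediction fails for the same
deterministic rule, which is exactly the sense in which weather (and, the
argument goes, an AI's train of thought) is predictable in principle but not
in practice.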

In an AI, the next second's "thought" might be reasonably predictable given  
knowledge of the current "thought"-state of the AI [Here I have to weasel-word  
about whether the AI is hardware goop or extremely complicated software]. I  
argue that the thought five minutes or an hour hence is inherently  
unpredictable just as weather a fortnight hence is.  In contrast, the overall  
shape of the AI's "personality" may be understandable just as climatic
patterns are.

If you could look at the code (or the circuit, for hardware) of the AI and,
starting from the current thought, arrive after only a little work at a
prediction or calculation of the next thought, would this not trivialize the
thought?  You would be in the position of Searle in his Chinese room.  Where's
the intelligence?  You would look at your yellow pads of state transition
diagrams, all separately comprehensible and with an easy-to-see overall
pattern, and conclude that the AI you had hand-simulated was not so
intelligent after all.  The clearest example of this is the humongous lookup
table; everyone seems to agree that it would act intelligent, but most think
it trivial.
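The lookup-table intuition can be made concrete with a toy sketch.  Assuming
the usual form of the thought experiment (a table mapping every possible
(state, input) pair directly to a reply and a next state), here is a
three-entry Python version; the table contents and function names are
hypothetical illustrations:

```python
# Toy version of the "humongous lookup table" thought experiment:
# every (state, input) pair maps directly to (reply, next state).
# A real table covering all possible conversations would be
# astronomically large; the point is that the mechanism, once seen,
# is completely transparent - and feels trivial.

table = {
    (0, "hello"):        ("hi there", 1),
    (1, "how are you?"): ("fine, thanks", 2),
    (2, "bye"):          ("goodbye", 0),
}

def respond(state, utterance):
    # One table lookup per "thought"; no computation to speak of.
    return table.get((state, utterance), ("...", state))

state = 0
for line in ["hello", "how are you?", "bye"]:
    reply, state = respond(state, line)
    print(reply)
```

Every "thought" here is a single, hand-simulable table lookup, which is why
inspecting such a system drains it of apparent intelligence even though its
outward behavior could, in the limit, be indistinguishable from the real
thing.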

If the AI's thought emerges in a non-obvious way, then you will be impressed
by the AI and think it quite a clever machine.  The complexity of the software  
(or hardware) must defeat any efforts you would make to control the detailed  
direction of its thought patterns.  Your efforts would be to the intelligence  
as the butterfly's ruffle is to the weather.  Your efforts would indeed  
deterministically influence the machine's thought, but in a practically  
unpredictable way.
