Article 4998 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!uwm.edu!linac!uchinews!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: intelligent organic goop
Message-ID: <1992Apr08.213706.30933@spss.com>
Date: 8 Apr 92 21:37:06 GMT
References: <505@tdatirv.UUCP> <1992Apr6.145603.22619@cs.ucf.edu>
Organization: SPSS Inc.
Lines: 31
Nntp-Posting-Host: spssrs7.spss.com

In article <1992Apr6.145603.22619@cs.ucf.edu> clarke@acme.ucf.edu (Thomas Clarke) writes:
>Consider weather as a rough analogy.  We know how it works (Navier-Stokes
>equations + thermodynamics + boundary conditions of solar input...).  Fifteen
>or so years ago the phenomenon of chaos became widely known.  When applied to
>weather, the upshot seems to be that conditions cannot be predicted more than
>about two weeks in advance.  [...] Actually, to be precise, the weather is
>predictable in principle, but not in practice in the physical world.
>
>In an AI, the next second's "thought" might be reasonably predictable given
>knowledge of the current "thought"-state of the AI [Here I have to weasel-word
>about whether the AI is hardware goop or extremely complicated software].  I
>argue that the thought five minutes or an hour hence is inherently
>unpredictable, just as weather a fortnight hence is.  In contrast, the overall
>shape of the AI's "personality" may be understandable just as climatic
>patterns are.

This seems to me different from (and more reasonable than) your earlier
comments about parts of the system being "unidentifiable."  It's not hard
to believe that the behavior of an AI might be unpredictable in practice
though not in theory.
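
Your weather analogy is easy to make concrete.  In a chaotic system, two
initial states that agree to six decimal places diverge exponentially, so
each step is computable but long-range prediction fails in practice.  Here
is a minimal sketch in Python, using the logistic map as a stand-in for
weather (or thought) dynamics -- the map and the numbers are my own
illustrative choices, not anything from your post:

    # Two trajectories of the logistic map x -> r*x*(1-x), started from
    # initial conditions that differ by one part in a million.
    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.400000, 0.400001    # "measured" state vs. true state
    for step in range(1, 41):
        a, b = logistic(a), logistic(b)
        if step % 10 == 0:
            print(step, abs(a - b))

    # The gap grows by roughly a factor of two per step, so by step ~20
    # the two runs bear no resemblance to each other -- the map's
    # analogue of the two-week forecasting horizon.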

>If you could look at the code (or circuit, for hardware) of the AI and,
>starting with the current thought, arrive after only a little work at a
>prediction or calculation of the next thought, would this not trivialize
>the thought?  You would be in the position of Searle with his Chinese room.
>Where's the intelligence?  You would look at your yellow pads of state
>transition diagrams, all separately comprehensible and with an easy-to-see
>overall pattern, and conclude that the AI you had hand-simulated was not so
>intelligent after all.

It sounds like you just want the program to be very complex before you call
it intelligent.  I suspect Searle would say that it doesn't matter how
complex it is -- to him it's still unintelligent.
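
Your yellow-pad exercise is also easy to mock up, which shows how little
complexity it takes before hand simulation is tedious but still trivial.
A toy transition table (the states are invented here purely for
illustration -- nobody is claiming this models an AI):

    # Each "thought" deterministically yields the next; predicting the
    # next state from the current one takes one table lookup.
    transitions = {
        "hungry":    "seek-food",
        "seek-food": "eat",
        "eat":       "content",
        "content":   "hungry",
    }

    state = "hungry"
    for _ in range(6):
        print(state, "->", transitions[state])
        state = transitions[state]

Searle's point, as I read it, survives scaling this table up: however many
states you add, each lookup is just as mechanical.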


