From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!uokmax!occrsh!fang!tarpit!cs.ucf.edu!news Thu Apr 16 11:34:04 EDT 1992
Article 5050 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!convex!constellation!uokmax!occrsh!fang!tarpit!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Subject: Re: intelligent organic goop
Message-ID: <1992Apr10.142051.28174@cs.ucf.edu>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
References: <1992Apr08.213706.30933@spss.com>
Date: Fri, 10 Apr 1992 14:20:51 GMT
Lines: 45

In article <1992Apr08.213706.30933@spss.com> markrose@spss.com (Mark
Rosenfelder) writes:
| In article <1992Apr6.145603.22619@cs.ucf.edu> clarke@acme.ucf.edu (Thomas
| Clarke) writes:
| >Consider weather as a rough analogy.  We know how it works ... but ...
| >the weather is predictable in principle, but not in practice in the
| >physical world.

| This seems to me to be different (and more reasonable) than your earlier
| comments about parts of the system being "unidentifiable."  It's not hard
| to believe that the behavior of an AI might be unpredictable in practice
| though not in theory.
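
To make the in-practice/in-principle distinction concrete, here is a
little Python sketch (mine, purely illustrative) of the Lorenz system,
the standard toy model of weather.  Two runs that differ by one part in
a billion in a single coordinate soon disagree completely, even though
the equations are known exactly:

import numpy as np

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # One forward-Euler step of the Lorenz equations.
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # a tiny "measurement error"

for step in range(50001):
    if step % 10000 == 0:
        print("t=%5.1f  separation=%.3e"
              % (step * 0.001, np.linalg.norm(a - b)))
    a, b = lorenz_step(a), lorenz_step(b)

After a few dozen time units the separation has grown from 1e-9 to the
size of the attractor itself; past that horizon, knowing the code buys
you nothing.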

Cognitive psychologists have discovered that human minds have a short-term
"stack" that can hold about seven items.  Neurophysiologists have not found
a structure corresponding to this stack, and I doubt they ever will in more
than a very general sense.  No surgeon will ever be able to excise 2 cc of
tissue in order to reduce the stack depth by one.  A neural net AI might
have an "awareness buffer" that emerges from the net dynamics rather as
eigenvalues emerge from a differential operator, so that the "buffer" is
neither localized nor subject to easily understood modification.
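
To illustrate the analogy (a sketch of the mathematics only, not a model
of the brain): the eigenmodes of even the simplest discretized operator,
a 1-D Laplacian, are global patterns in which every grid point
participates.

import numpy as np

n = 100
# Second-difference matrix: a discrete 1-D Laplacian.
L = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)

vals, vecs = np.linalg.eigh(L)
mode = vecs[:, -1]      # eigenvector of the least negative eigenvalue
frac = np.mean(np.abs(mode) > 0.01 * np.abs(mode).max())
print("eigenvalue: %.4f" % vals[-1])
print("fraction of grid points carrying this mode: %.2f" % frac)

All 100 points carry the mode; removing any small patch of the matrix
perturbs every eigenvalue a little rather than deleting one cleanly.
There is no 2 cc of matrix to excise.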

| >If you could look at the code (or circuit for hardware) of the AI and,
| >starting with the current thought, after only a little work arrive at
| >a prediction or calculation of the next thought, would this not
| >trivialize the thought?  You would be in the position of Searle with
| >his Chinese room.  Where's the intelligence?

| It sounds like you just want the program to be very complex before you call
| it intelligent.  I suspect Searle would say that it doesn't matter how 
| complex it is-- it's still unintelligent, to him.

I didn't say I agreed wholeheartedly with Searle, but I do have some
feeling for his position.  I conjecture that it could be demonstrated
that intelligence can arise only in sufficiently complex systems, where
the proper definition of complexity is relative to the observing
intelligence (the Turing test conductor).
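
For what the conjecture is worth, observer-relative complexity is easy
to demonstrate in a crude form (my sketch, not a rigorous definition):
the same string is incompressible to a generic observer but nearly free
to one who knows the generating rule.

import zlib, random

random.seed(42)
data = bytes(random.randrange(256) for _ in range(10000))

generic = len(zlib.compress(data, 9))   # observer without the rule
informed = len(b"seed=42 n=10000")      # observer who knows the generator
print("raw: %d  zlib: %d  knowing the rule: %d"
      % (len(data), generic, informed))

A Turing test conductor is in the generic observer's position: whether
the system's behavior looks complex depends on what regularities the
conductor can model.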

Incidentally, Putnam (Representation and Reality) speculates that the  
equivalence relation "between the structures of all physically possible systems  
(organisms cum environments) which contain a physically possible organism who  
entertains a particular belief (is) ... undiscoverable by physically possible  
intelligent beings."  [see the Rock FSA thread]  This is stronger than arguing  
that there is a threshold complexity for intelligence, but perhaps rigorous  
demonstration requires going all the way to "undiscoverability". 