From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!mojo.eng.umd.edu!darwin.sura.net!jvnc.net!yale.edu!qt.cs.utexas.ed Tue Apr  7 23:23:42 EDT 1992
Article 4870 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!mojo.eng.umd.edu!darwin.sura.net!jvnc.net!yale.edu!qt.cs.utexas.ed
u!zaphod.mps.ohio-state.edu!uwm.edu!linac!uchinews!spssig.spss.com!markrose
From: markrose@spss.com (Mark Rosenfelder)
Newsgroups: comp.ai.philosophy
Subject: Re: intelligent organic goop
Message-ID: <1992Apr01.210117.26523@spss.com>
Date: 1 Apr 92 21:01:17 GMT
References: <7341@uqcspe.cs.uq.oz.au> <1992Mar31.182905.25732@cs.ucf.edu>
Organization: SPSS Inc.
Lines: 34
Nntp-Posting-Host: spssrs7.spss.com

In article <1992Mar31.182905.25732@cs.ucf.edu> clarke@acme.ucf.edu (Thomas Clarke) writes:
>I believe Searle would not deny the possibility of artificial people with  
>feelings.  Rather if it pleases you to dissect your personal slave, then you  
>would find not an nth generation digital computer, but some sort of analog  
>goop.   Searle would have it that the analog goop would have to be organic just  
>like the organic sponge in your head.

I don't think that's his position.  He explicitly states (Sci Am 1/1990)
that "it might be possible to produce a thinking machine out of... 
silicon chips or vacuum tubes."  His argument is that it wouldn't
be thinking "just by virtue of implementing a computer program".

>I am inclined to agree with Searle, but think that he goes too far in limiting  
>things to organic goop.  The slave would contain some sort of structure whose  
>detailed method of functioning would be unknowable.  That is, it would not  
>be like a machine or a simple software system where the results of changing a  
>particular part are predictable.  There would be no way to improve performance  
>by upping the clock rate or adding more memory because the clock and the memory  
>could not be explicitly identified.  Were the AI implemented entirely in  
>software, then the heap or the device drivers would be unidentifiable.
>
>If such parts are identifiable, then, I suspect, the system could not be visibly  
>intelligent.  

It sounds like you're saying that you can't call a system intelligent if you
can figure out how it works.  Sounds barmy to me.  If we knew exactly how
the brain worked, would it cease to be intelligent?  If we managed to
get this AI without identifiable parts working (neat trick, that), would
it cease to be an AI if we came up with a method to identify the parts
after all?

Or perhaps you're saying that an AI is unlikely to be procedural?  Artificial
neural networks, for instance, share with brains the interesting property
that it can be quite a job to figure out what the heck they're doing.
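To make that concrete, here's a toy sketch (my own throwaway code, nothing from
Searle's or Clarke's articles): train a tiny network on XOR with plain
backpropagation.  Once it's trained, there's no single weight you can point to
as "the clock" or "the memory" or "the part that does XOR" -- the function is
smeared across all of them.

```python
# Hypothetical toy example: a 2-4-1 sigmoid network trained on XOR.
# The point is not the training itself but that, afterward, no individual
# weight is identifiable as a functional "part" of the system.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])  # the four inputs
y = np.array([[0.], [1.], [1.], [0.]])                  # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # 4 sigmoid hidden units
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # 1 sigmoid output unit
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def loss():
    # mean squared error of the current network on all four patterns
    return float(((sig(sig(X @ W1 + b1) @ W2 + b2) - y) ** 2).mean())

loss_before = loss()
for _ in range(20000):                  # plain full-batch backpropagation
    h = sig(X @ W1 + b1)                # hidden activations
    out = sig(h @ W2 + b2)              # network output
    d2 = (out - y) * out * (1 - out)    # output-layer delta
    d1 = (d2 @ W2.T) * h * (1 - h)      # hidden-layer delta
    W2 -= 0.5 * (h.T @ d2); b2 -= 0.5 * d2.sum(axis=0)
    W1 -= 0.5 * (X.T @ d1); b1 -= 0.5 * d1.sum(axis=0)
loss_after = loss()
```

The error drops, yet staring at any one entry of W1 or W2 tells you nothing
about what role it plays; you'd have to analyze the whole network at once.
That's roughly the sense in which nets, like brains, resist the
clock-and-memory kind of decomposition -- without it being any bar to
calling them intelligent.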
