From newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc Mon Jun 15 16:05:00 EDT 1992
Article 6240 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!torn.onet.on.ca!utgpu!news-server.csri.toronto.edu!rutgers!micro-heart-of-gold.mit.edu!news.media.mit.edu!nlc
From: nlc@media.mit.edu (Nick Cassimatis)
Newsgroups: comp.ai.philosophy
Subject: Re: Vitalism and Intellectuaism
Message-ID: <1992Jun13.063902.2610@news.media.mit.edu>
Date: 13 Jun 92 06:39:02 GMT
References: <1992Jun10.041831.16727@news.media.mit.edu> <1992Jun10.131608.23965@cs.ucf.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 87

In article <1992Jun10.131608.23965@cs.ucf.edu> clarke@acme.ucf.edu (Thomas Clarke) writes:
>> >Cassimatis) writes:
>> >> I'm
>> >> willing to bet quite a bit that these problems won't be solved by
>> >> people who expend most of their "mental energies" to solve Searle's
>> >> puzzle.
>> >It seems to me important to establish, if possible, what the fundamental
>> >limits are.  We already know time should not be wasted on the halting 
>> >problem.
>> 
>> Yes, but the halting problem is well defined (it's part of math --
>> which means that it is well-defined by definition!)  The rest of my
>> post was an attempt to show that the putative limits of understanding
>> and so forth are not well defined.
> 
>Let's define them, if possible!  A good job for collective net
>intelligence. No?

Yes, I think that this would be a wonderful task.  But most of the
discussion here doesn't even bother to do so.  (I've even seen "I
won't play the definition game.")  There is a criticism of AI that
goes something like the following: "Define Intelligence.  I don't have
a definition.  So how can you work on something that is not
well-defined?"  The response to this is that we are trying to get
computers to use language, solve problems, and do all sorts of other
things that are not hard to define.  Now if there were a definition of
intelligence that included a procedure for solving the halting problem
(or any other unsolvable problem), then we would have set a genuine
limit on a research program called artificial intelligence.  But this
isn't really that interesting, considering nobody was working on the
halting problem in the first place (since we knew it was unsolvable
before we got our definition of intelligence).
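(A side note, in code, on what "unsolvable" means here: no general
procedure decides halting, but halting is still *semi-decidable* --
run a program for a bounded number of steps, and a "yes, it halted"
answer is reliable, while running out of steps tells you nothing.
This is a minimal sketch of my own, not anything from the thread; the
function and program names are made up for illustration.)

```python
# Hedged sketch: halting is semi-decidable.  We model "programs" as
# Python generator functions (each yield is one "step" of a machine).

def runs_within(program, arg, max_steps):
    """Run `program(arg)` for at most `max_steps` steps.
    True  -> it definitely halted within the budget.
    False -> budget exhausted; says NOTHING about eventual halting."""
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # the program finished: a reliable "halts"
    return False          # no verdict either way

def halts_quickly(n):     # toy program: halts after n steps
    for _ in range(n):
        yield

def loops_forever(_):     # toy program: never halts
    while True:
        yield
```

So `runs_within(halts_quickly, 5, 100)` returns True, while
`runs_within(loops_forever, 0, 100)` returns False -- and no choice of
`max_steps` can turn that False into a proof of non-halting, which is
exactly the asymmetry the undecidability result leaves us with.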

Now let's turn this on Artificial Consciousness.  Since we don't
really have a good definition of consciousness, we can only hope to
build machines that exhibit behavior we would say requires
consciousness in a human.  If we did settle on a definition, then we
could see if there were any mathematically imposed limits.  But I
doubt we could get a good definition of consciousness until we had
more precise terms for cognition.  Before then, all we can do is
mimic "conscious behavior".

Since I have yet to be given a convincing and precise definition
of understanding, intelligence, consciousness, etc., I (like the good
math major that I am) won't let Searle's (or anyone else's) proofs
alter my activities.

How can people let a priori arguments from un- or ill-defined terms
change their behavior?

>> Now think about the debate over whether computers can *really* be
>> intelligent or *really* understand.  How is it different from the
>> question of whether a virus is alive or not?
>
>I think biochemists can reproduce a virus from basic chemicals right
>now.  Make up a strand of DNA/RNA with right sequence, make up some
>proteins for the coat (this might actually be harder), put all in
>test tube and shake.  Voila!  Virus particles.
>
>No way can they yet make a functioning cell.  I'll bet that they'll
>never make one by putting chemicals in a test tube and shaking!   

It probably won't be so simple, but why do you think that they'll
never be able to make up a cell?  Is there any principled reason to
think that this won't be possible some day?  Even if we couldn't
reproduce a cell molecule by molecule, why couldn't we build something
that did the same sorts of things that cells do?

>This is one difference between viruses and "living" cells. I see
>a similar difference between intelligence and consciousness.

The difference is only one of complexity.

>Functions of life (reproductive structure) are "easy", 
>life itself is hard.  In the same way conscious functions (chess 
>playing, specific disease diagnosis) are proving doable, but 
>consciousness, thinking itself, is much harder.  
>
>I'll bet the techniques used in computer chess and expert systems 
>will not play a role in achieving artificial consciousness.

Even if this is so, so what?  To say that AI or AC won't be achieved
by the present tools alone (which I'm pretty certain of) is not to say
that the whole research program is bankrupt.  Galileo didn't have
calculus, but this didn't mean that he was wrong to aim at a
mathematical model of mechanics (assuming that he was).

-Nick
