Newsgroups: comp.ai
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!swrinde!howland.reston.ans.net!vixen.cso.uiuc.edu!uchinews!not-for-mail
From: espogrel@midway.uchicago.edu (Pogie)
Subject: Re: Fundamental AI issues
X-Nntp-Posting-Host: kimbark.uchicago.edu
Message-ID: <Dp03Mt.M4C@midway.uchicago.edu>
Sender: news@midway.uchicago.edu (News Administrator)
Organization: Antarctic Flying Penguin DEATH!
References: <4jeihf$bhu@nntp1.best.com>
Date: Thu, 28 Mar 1996 23:23:16 GMT
Lines: 63

In article <4jeihf$bhu@nntp1.best.com>,
Jose Castejon <castejon@LosGatos.marben.com> wrote:
>
>theoretical mathematical arguments why universal Turing machines might be
>incapable of reproducing typical human traits (common sense, intuition,
>consciousness, intelligence in general), in the sense that characteristics 
>like these might involve non-computable (non-algorithmic) ingredients which 
>as such would be beyond the reach of universal Turing machines, by definition.

I would first argue that there is no such thing as 'non-algorithmic ingredients
which are beyond the reach of universal Turing machines'. Or, more simply, that
there is nothing which cannot be codified into a set of rules governing that
behavior or characteristic. Take first-order logic, or propositional
calculus--there is *nothing* I can think of that cannot be defined in terms
of one of these two rule paradigms, given that we know the world we're defining
well enough.  As for reproducing common sense, intuition, consciousness,
or intelligence 'in general', we first have to understand what these things
are in humans before we can ever create a machine to emulate them.  As a
psychologist first and an AI dabbler second, I am greatly inclined to laugh
at authors claiming that we cannot create machines with the ability to feel
human emotions, or to think as humans do. Of course we cannot: we don't even
know how these things work in humans, so how could we possibly create an
intelligence with emotions? Further, intelligence itself is all but
undefinable, which is why Turing essentially proposed a new question: 'can
machines pretend to be human?'
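
To make 'codified into a set of rules' concrete, here's a toy sketch (my own
illustration, written in Python) of forward chaining over propositional Horn
rules--given any behavior we understand well enough to write as rules, the
consequences follow mechanically. The facts and rules below are invented:

```python
# A toy forward-chaining engine over propositional Horn rules: keep firing
# any rule whose premises are all known facts until nothing new follows.
# The rule base is an invented example, not anything from the post.

def forward_chain(facts, rules):
    """Return the closure of `facts` under `rules` (premises -> conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

RULES = [
    ({"rain"}, "wet_ground"),
    ({"wet_ground"}, "slippery"),
    ({"slippery", "running"}, "fall_risk"),
]

# Everything derivable from "rain" is found purely by rule application.
print(forward_chain({"rain"}, RULES))
```

The point of the sketch: once the domain is captured as rules, deriving the
behavior is pure mechanism--exactly what a Turing machine can do.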

Machines have already proven themselves capable of reproducing human traits,
many successfully reproducing a trait while using the same cognitive
mechanism humans do (or so we think). A good example would be the Neural Net
simulation of the U-shaped learning curve in the learning of past-tense verbs.
Rumelhart and McClelland, and many others, have created machines that
learn just as children do, with the same successes and failures, using
mechanisms that are arguably the same as the cognitive mechanisms that allow
children to learn.
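
Just to give a flavor of the pattern-associator idea, here's a minimal sketch
in Python--NOT the actual Rumelhart-McClelland model. A zero-initialized
multiclass perceptron learns which past-tense ending a verb takes from crude
phonetic features of its final sound; the verb list, feature scheme, and class
labels are all my own toy assumptions:

```python
# Minimal pattern-associator sketch (not the real R&M model): a multiclass
# perceptron maps invented phonetic features to a past-tense suffix class.

SUFFIX = {0: "-id", 1: "-t", 2: "-d"}   # e.g. wanted, walked, played

def features(stem):
    """Crude features of the stem's final sound (illustrative only)."""
    last = stem[-1]
    if last in "td":
        return [1, 0, 0]        # ends in t/d -> takes "-id"
    if last in "pkfsxh":        # rough voiceless-consonant set
        return [0, 1, 0]        # -> takes "-t"
    return [0, 0, 1]            # voiced ending -> takes "-d"

TRAIN = [("want", 0), ("need", 0), ("walk", 1), ("laugh", 1),
         ("play", 2), ("call", 2)]

W = [[0.0] * 3 for _ in range(3)]       # one weight row per suffix class

def predict(x):
    scores = [sum(w * f for w, f in zip(row, x)) for row in W]
    return scores.index(max(scores))

for _ in range(5):                      # a few deterministic passes
    for stem, y in TRAIN:
        x = features(stem)
        p = predict(x)
        if p != y:                      # classic perceptron update
            for i in range(3):
                W[y][i] += x[i]
                W[p][i] -= x[i]

# After training, the net extends the regular pattern to a verb it never
# saw, much as children overapply "-ed" to novel verbs.
print("jump ->", SUFFIX[predict(features("jump"))])   # prints "jump -> -t"
```

The interesting part is the generalization: nothing stored the word "jump";
the behavior falls out of weights learned from other verbs.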

At the root of it, I think what you're proposing is fundamentally flawed
because the human traits you speak of do not necessarily have definitions.
We don't yet know how these traits are manifested in the human mind. Put
another way: yes, we cannot code certain human traits into machines, but
not because they are uncodable by virtue of being _human_ or _special_--
simply because we do not yet understand them as traits, and therefore
cannot code them into an intelligent machine.

>
>	I am sure that many of you already know that I am referring to
>some well publicized ideas by R. Penrose. Now if he is wrong I think his
>views should be refuted with arguments other than hand waving, proclaiming
>that he is ignorant of the AI ways, period, and similar stuff. Penrose is
>certainly not an AI professional, but just as certainly is he a world
>stature mathematician whose views on topics like the current one can hardly
>be brushed off as the rantings of a crackpot.

I am not actually familiar with Penrose, though I'll be sure to make myself
familiar. I hope I'm not just hand-waving :)

Pogie

-- 
-----------------Hey CDA, Fuck You! (Go ahead, arrest me)-------------------
Eric Pogrelis                                     
espogrel@midway.uchicago.edu                      
The University of Chicago... Where fun comes to die
