Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!gatech!rutgers!argos.montclair.edu!hubey
From: hubey@pegasus.montclair.edu (H. M. Hubey)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <hubey.786822365@pegasus.montclair.edu>
Sender: root@argos.montclair.edu (Operator)
Organization: SCInet @ Montclair State
References: <1994Dec5.152724.10065@oracorp.com>
Date: Wed, 7 Dec 1994 17:46:05 GMT
Lines: 85

daryl@oracorp.com (Daryl McCullough) writes:

>The Turing Test is not about AI in general, it is about a very
>restricted area of AI, the creation of artificial minds, or perhaps
>artificial personalities. Very little (if any) current AI research is
>directed at meeting this goal, perhaps because it is too hard, or
>because nobody is interested, or because it is believed that certain
>groundwork (pattern recognition, problem solving, etc.) must be laid
>ahead of time. One would only expect the Turing Test to come into play
>when researchers are ready and willing to work on artificial
>personalities.

It's not done because it's too hard. The early rosy predictions
(and the rosy timetables) have turned out to be too good to be true.

Common sense is very difficult to give to a machine. It will probably
be the last thing machines have. Before that time, they'll be
experts in every field. And if enough expert systems have been written
(who knows, maybe millions) and then some massive government project
underwrites the seamless integration of all these ESs, then maybe some
machine that seems like a cranky human may be created.

All the things that people thought made people smart (like mathematics,
chess, disease diagnosis, etc.) turn out to be easier to simulate than
unstructured, general intelligence.


Just think about little children. They try to bite into and eat 
every object they find. It takes years before they learn to recognize
by sight what is food and what is not. Orientation of objects in 3-D
is a gigantic problem. And then the problem of even recognizing simple
things (like phonemes in speech recognition) turns out to be much
more complex and difficult than we imagined. Clearly the brain is
doing things in a manner different from the way we try to program
computers.


>willing to bet that there is not a single person on this newsgroup who
>would not come to accept a program as intelligent and conscious if the
>program were capable of carrying on a lively, insightful discussion
>about politics, morality, love, family and artificial intelligence.

If such a program could be written... :-)

This is the main reason why the Searle CR is a red herring. It assumes
that this can be done and then still claims that the machine can't
understand because it's not human.  By his argument he's giving us
his definition of intelligence, but the AI researchers are using a
different definition. It surprises me that the word "artificial" keeps
escaping Searle. "Artificial" gold is not gold; it merely looks like
gold, and only the experts can tell the difference using special techniques.

>it is about particularly those aspects of intelligence that are
>revealed through what we call "personality". A TT-passing machine may
>very well be a complete klutz at fixing a bike (it may not even have
>any hands or eyes with which to do so). However, it's also the case
>that there are people who are complete klutzes, and nobody denies
>their "personhood" on that basis.

This stuff about fixing bikes and reading maps is also a red herring.
First off, in order for the machine to "read a map" it has to have eyes,
etc., and then we can tell it doesn't have eyes. On the other hand, if the
map were machine-readable, then a machine that can converse intelligently
with us, convincing us that it has an idea of what the map represents
along with all the entities inhabiting the map, cannot be said to be
unable to read a map. The stuff about fixing bikes is along the same lines:
is it required that the machine possess the manual dexterity of humans?
If not, can it then simply diagnose the problem after "seeing" (in a manner
of speaking, as above with the map) what the state of the bike looks like?

They all reduce to the same thing. We humans are the best instrument
for measuring intelligence since we possess it.

And that's exactly what the TT captures.

--
						-- Mark---
....we must realize that the infinite in the sense of an infinite totality, 
where we still find it used in deductive methods, is an illusion. Hilbert,1925
