Newsgroups: comp.ai.philosophy,comp.ai,comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Minsky's new article (was: Roger Penro
Message-ID: <jqbCzG3K0.85K@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <39d8g2$dlm@coli-gate.coli.uni-sb.de> <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu> <CzDqLI.686@cogsci.ed.ac.uk>
Date: Fri, 18 Nov 1994 04:09:36 GMT
Lines: 136
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:22216 comp.ai:25290 comp.robotics:15440

In article <CzDqLI.686@cogsci.ed.ac.uk>,
Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>In article <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu> hpm@cs.cmu.edu writes:
>>
>>These seem silly in hindsight.  AI critics of 1994 will seem equally
>>silly.  A future Matthews (while spleening on some future proposal)
>>will note how critics of AI were just not paying attention in school,
>>when it was obvious in 1994 that machines could think.
>
>How will you ever show that they're conscious?

That's the "other minds" problem, Jeff; how will you ever show that
Jeff Dalton is conscious?  That depends to some degree upon the nature of
your definition of "conscious".  Physicians trying to determine whether
someone has regained consciousness have ways to tell.

But the statement was "machines could think", and the focus here has been
on behavior, such as flies avoiding obstacles.  We can talk about *that*
without wading through the metaphysical mud.

>Sure, it may seem obvious that they are conscious; but what seems
>obvious might still be wrong.

You seem to have missed the point, which was sarcasm; the imputed obviousness
is strictly via hindsight.  The fact that seeming obviousness may be wrong is
yet another argument against Matthews et al. distinguishing between AI and
such things as space flight based upon the supposed obviousness of the latter.
Space flight may have been obvious to Goddard, but it wasn't obvious to
others.  Steady state cosmology may have been obvious to Hoyle and Bondi, but
not others.  The absurdity of plate tectonics may have been obvious to many,
but not to Wegener.  It was obvious to Einstein that God doesn't play dice,
but not to Bohr.  The dominance of nature over nurture in intelligence may
have been obvious to Burt, but not to Gould.  That mechanistic models have no
room for "the experience of yellow" may be obvious to some, but not to others.
That the arguments of Lucas, Searle, and Penrose have gaping holes and that in
fact there cannot be a valid theoretical argument that humans have some
abstract quality that algorithmic machines cannot have is obvious to me but it
is not obvious to you.  Obviousness is not an argument.

>This is very different from space flight, where there are
>straightforward empirical tests.

You seem to have your tenses confused.  There was no empirical test that
space flight was possible.  There were theoretical arguments that space flight
was possible, as well as theoretical arguments that it was not.  Some of these
arguments were based upon faulty assumptions and faulty logic.  There are
theoretical arguments that thinking, consciousness, and/or "intelligent"
behavior, from the level of flies to humans, is possible by mechanisms that do
not employ meat or quantum tubules, and there are theoretical arguments that
they are not possible.  Some of these arguments are based upon faulty
assumptions and faulty logic.  Perhaps all of them.

As for empirical tests, some people believe that there are empirical tests to
determine (with Humean limitations) whether something can navigate as well as
a fly, whether something thinks, and, yes, whether something is conscious.
But the argument that there is no unassailable proof that another mind has an
"actual experience of yellow" is just a way to confuse the issue by
introducing a tricky metaphysical/semantic/linguistic problem that has no
bearing on intelligence, artificiality, or the development of AI.

>>    Why, by then,
>>machines could read written text, understand speech, reason about
>>complex subjects, navigate through the world, beat nearly everyone in
>>intellectual games, not to mention accomplishing mathematical feats
>>impossible for humans.  And they were improving on all fronts at
>>break-neck speed--each year some new barrier fell.  And anyway, it was
>>obvious by then that intelligent mechanisms were possible, since the
>>biologists had shown conclusively that humans themselves were
>>mechanisms cobbled together by the trials and errors of Darwinian
>>evolution. 
>
>So far as I can tell, the Penrose arguments say nothing against
>*artificial* intelligence, only digital computer intelligence.

Non sequitur.  You seem to have your threads confused.  

>Even Searle allows that humans are machanisms.  He just thinks
>it matter what the mechanism is, not just the externally observable
>behavior.

It obviously matters to him, but so what?  You would make more sense by noting
the sense in which he claims it matters, namely that algorithmic mechanism,
while perhaps sufficient to produce behavior identical to that of humans,
is not sufficient to produce some undefined attribute called "understanding".
The basis for this claim is a polemic known as the "Chinese Room" argument.
For those who think they have found some flaw in this argument, it really
doesn't matter much what Searle thinks (other than for its corrupting
influence).

I, taking a Searlian stance, deny that Searle understands anything at all.
None of his arguments will convince me otherwise, since those are just
behavior that could be produced algorithmically.  He can point to his insides,
but I know of no demonstration that the particular juxtaposition of his
particular components necessarily produces understanding; certainly there are
plenty of very similar configurations that do not.

If I return to sanity for a moment, I will admit, based upon my normal
behavior-based meaning (a matter of usage, per Neil Rickert) of
"understanding", that Searle understands some things.  But certainly not the
issue that he has become famous for.

>Why such positions excit so much hostility is a mystery to me.

So whose fault is that?  When people say something like this, they usually mean
that the people they are referring to must be irrational.  Perhaps more humility
is in order, Jeff.

>So what if you have to do some quantum machanical stuff rather
>than just run programs?

"So what if Jeff Dalton's mother wears army boots?"
"So what if <arbitrary counterfactual>?"  Is this just flame bait, Jeff?

>Why is that such a flame-generating
>issue?

Why was Rupert Sheldrake's theory of morphogenetic fields such a
flame-generating issue?  Why did Von Daniken's claims that every anthropologist
was deluded generate flames?  Why were Velikovsky's claims treated hostilely?

There are two basic reasons, Jeff.  One, there are the claims, sometimes
explicit and sometimes implicit, that every researcher and thinker in the field
is a fool, or hidebound, or a lackey of the hierarchy, or has wasted their life
on pointless and mistaken pursuits.  Regardless of the validity of these claims,
the mildness or strength of the language, or the quality of the person making them,
there is an emotional response.  Not too mysterious.  Two, scientists
intuitively understand the importance of Occam's Razor and accurate models
to their pursuits.  The Church-Turing thesis has great explanatory power,
and challenges to it must be taken very seriously.  For similar reasons
parapsychology (paranormal abilities, the Copenhagen Interpretation,
Sarfattiism, etc.) gets such a strong reaction, because it has implications
for basic models.  In the past, QM, Big Bang, plate tectonics, relativity,
etc. also generated many flames.  (Of course, in this case, you have the
additional problem that Penrose is wrong. (:-)?)  Why is that mysterious?
-- 
<J Q B>
