Newsgroups: comp.ai.philosophy,comp.ai,comp.robotics
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!uhog.mit.edu!bloom-beacon.mit.edu!gatech!swrinde!pipex!uknet!festival!edcogsci!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Subject: Re: Minsky's new article (was: Roger Penro
Message-ID: <Czou9A.110@cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: bute-alter.aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
References: <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu> <CzDqLI.686@cogsci.ed.ac.uk> <jqbCzG3K0.85K@netcom.com>
Date: Tue, 22 Nov 1994 21:27:10 GMT
Lines: 252
Xref: glinda.oz.cs.cmu.edu comp.ai.philosophy:22463 comp.ai:25405 comp.robotics:15578

In article <jqbCzG3K0.85K@netcom.com> jqb@netcom.com (Jim Balter) writes:
>In article <CzDqLI.686@cogsci.ed.ac.uk>,
>Jeff Dalton <jeff@aiai.ed.ac.uk> wrote:
>>In article <39eaqk$nn9@cantaloupe.srv.cs.cmu.edu> hpm@cs.cmu.edu writes:
>>>
>>>These seem silly in hindsight.  AI critics of 1994 will seem equally
>>>silly.  A future Matthews (while spleening on some future proposal)
>>>will note how critics of AI were just not paying attention in school,
>>>when it was obvious in 1994 that machines could think.
>>
>>How will you ever show that they're conscious?
>
>That's the "other minds" problem, Jeff; how will you ever show that
>Jeff Dalton is conscious? 

I know it's the other minds problem.  Note that there's no
analogous "space flight problem".

> That depends to some degree upon the nature of
>your definition of "conscious".  Physicians trying to determine whether
>someone has regained consciousness have ways to tell.

Well, that's one meaning of "conscious".

>But the statement was "machines could think", and the focus here has been
>on behavior, such as flies avoiding obstacles.  We can talk about *that*
>without wading through the metaphysical mud.

Sure, but then critics are muddy, rather than silly. :->

If critics say "machines can't do <insert a ref to some behavior>",
their hash may well be settled in the future.  But when the critics
are talking about whether machines will ever have subjective
experience, then straightforward, "look, there they are, doing it"
refutations may never be possible.  Nonetheless, a change in thinking
may occur, so that it's just *obvious* that (some) machines have
subjective experiences, that it's "like something" to be them,
that they're not "zombies", etc; or thinking may change so that
these issues are no longer considered.  But that could happen
even if these people were wrong to think that way.

>>Sure, it may seem obvious that they are conscious; but what seems
>>obvious might still be wrong.
>
>You seem to have missed the point, which was sarcasm; the imputed obviousness
>is strictly via hindsight.

So what point have I missed?

>                           The fact that seeming obviousness may be wrong is
>yet another argument against Matthews et. al. distinguishing between AI and
>such things as space flight based upon the supposed obviousness of the latter.
>Space flight may have been obvious to Goddard, but it wasn't obvious to
>others.  Steady state cosmology may have been obvious to Hoyle and Bondi, but
>not others.  [Etc]

So?  I'm perfectly happy to have it cut both ways, so long as it
at least cuts the way I want (ie, against those in the future who
think it's obvious that machines can think).

>            That mechanistic models have no
>room for "the experience of yellow" may be obvious to some, but not to others.

This "room for" stuff is not from anything said by Sean Matthews.

>That the arguments of Lucas, Searle, and Penrose have gaping holes

So what are the gaping holes?  And if they're so gaping, why can't
you easily convince the people who don't see them that they're wrong?

>        and that in
>fact there cannot be a valid theoretical argument that humans have some
>abstract quality that algorithmic machines cannot have is obvious to me but it
>is not obvious to you.  

What's this "abstract quality"?  BTW, there's nothing in the Searle
or Penrose positions against *artificial* intelligence.  Why are
algorithmic machines so important?  Why are they so fiercely
defended?  I find this very odd.

>   Obviousness is not an argument.

Which I never said it was.
>
>>This is very different from space flight, where there are
>>straightforward empirical tests.
>
>You seem to have your tenses confused.  There was no empirical test that
>space flight was possible.  

You are trying to make too much turn on a completely minor point.

When you do get space flight, there's a straightforward empirical
test.  Where's the analogous test for subjective experience?
Without such a test, all these "in the future, it will be obvious
we were right" analogies fall apart.

>As for empirical tests, some people believe that there are empirical tests to
>determine (with Humean limitations) whether something can navigate as well as
>a fly, whether something thinks, and, yes, whether something is conscious.

People will believe all sorts of nonsense. 

>But the argument that there is no unassailable proof that another mind has an
>"actual experience of yellow" is just a way to confuse the issue by
>introducing a tricky metaphysical/semantic/linguistic problem that has no
>bearing on intelligence, artificiality, or the development of AI.

Good thing I never made that argument then, eh?

BTW, I'd be completely happy if everyone said "let's set these
tricky metaphysical/semantic/linguistic problems aside and try
to get machines to *do* things".  But nooooooooooooooooooooooo!
(Some) AI folk have to debate these tricky metaphysical/semantic/
linguistic problems and heap abuse on those who disagree with
them.

>>>    Why, by then,
>>>machines could read written text, understand speech, reason about
>>>complex subjects, navigate through the world, beat nearly everyone in
>>>intellectual games, not to mention accomplishing mathematical feats
>>>impossible for humans.  And they were improving on all fronts at
>>>break-neck speed--each year some new barrier fell.  And anyway, it was
>>>obvious by then that intelligent mechanisms were possible, since the
>>>biologists had shown conclusively that humans themselves were
>>>mechanisms cobbled together by the trials and errors of Darwinian
>>>evolution. 
>>
>>So far as I can tell, the Penrose arguments say nothing against
>>*artificial* intelligence, only digital computer intelligence.
>
>Non sequitur.  You seem to have your threads confused.  

What is the relevance of this:

  it was obvious by then that intelligent mechanisms were possible,
  since the biologists had shown conclusively that humans themselves
  were mechanisms cobbled together...

Since no one's arguing against mechanisms (so broadly considered)
being intelligent, this is a rather reddish herring.  Hence my
"non sequitur".

>>Even Searle allows that humans are mechanisms.  He just thinks
>>it matters what the mechanism is, not just the externally observable
>>behavior.
>
>It obviously matters to him, but so what?

So he's not arguing against the possibility of intelligent
mechanisms.

(He also has the wit to see that the Turing Test is bogus,
but that's a different discussion.)

>  You would make more sense by noting
>the sense in which he claims it matters, namely that algorithmic mechanism,
>while perhaps sufficient to produce behavior identical to that of humans,
>is not sufficient to produce some undefined attribute called "understanding".
>The basis for this claim is a  polemic known as the "Chinese Room" argument.

And an argument about syntax and semantics.

>For those who think they have found some flaw in this argument, it really
>doesn't matter much what Searle thinks (other than for its corrupting
>influence).

Who's the enemy?  Who's arguing against the possibility of
intelligent mechanisms, if even Searle is not doing so?

>I, taking a Searlian stance, deny that Searle understands anything at all.
>None of his arguments will convince me otherwise, since those are just
>behavior that could be produced algorithmically.  He can point to his insides,
>but I know of no demonstration that the particular juxtaposition of his
>particular components necessarily produces understanding; certainly there are
>plenty of very similar configurations that do not.

Such as?

>>Why such positions excite so much hostility is a mystery to me.
>
>So whose fault is that?  When people say something like this, they
>usually mean that the people they are referring to must be irrational.

Where did you ever get that idea?

>Perhaps more humility is in order, Jeff.

I will if you will.

>>So what if you have to do some quantum mechanical stuff rather
>>than just run programs?
>
>"So what if Jeff Dalton's mother wears army boots?"
>"So what if <arbitrary counterfactual>?"  Is this just flame bait, Jeff?

Can I take it, then, that you're not planning to explain why
"such positions excite so much hostility"?

>>Why is that such a flame-generating
>>issue?
>
>Why was Rupert Sheldrake's theory of morphogenetic fields such a
>flame-generating issue?  Why were Von Daniken's claims that every
>anthropologist was deluded?  Why were Velikovsky's claims treated hostilely?

And saying consciousness requires some quantum effects is like
that, is it?

>There are two basic reasons, Jeff.  One, there are the claims, sometimes
>explicit and sometimes implicit, that every researcher and thinker in
>the field is a fool or hidebound or a lackey of the hierarchy or has 
>wasted their life on pointless and mistaken pursuits. 

*Every* researcher and thinker?  Most of the AI researchers and
thinkers I know aren't doing stuff that would be in any trouble
(except maybe politically) even if Penrose and Searle were right.

> Regardless of the validity of these claims
>or how mild or strong the language or the quality of the person making them,
>there is an emotional response.  Not too mysterious.

Is this supposed to be interestingly different from saying they're
irrational?  (Which I never did, BTW.)

>  Two, scientists
>intuitively understand the importance of Occam's Razor and accurate models
>to their pursuits.  

So when a machine passes the TT, will they insist it is conscious?
Will they not prefer explanations that don't require that extra
ingredient?

>The Church-Turing thesis has great explanatory power,
>and challenges to it must be taken very seriously.

What does the C-T thesis have to do with this?

>     For similar reasons
>psychophysics (paranormal abilities, the Copenhagen Interpretation,
>Sarfattiism, etc.) gets such a strong reaction, because it has implications
>for basic models.  In the past, QM, Big Bang, plate tectonics, relativity,
>etc. also generated many flames.  (Of course, in this case, you have the
>additional problem that Penrose is wrong. (:-)?)  Why is that mysterious?

If Penrose hadn't attacked AI, but had instead confined himself
to his speculations on the physics of consciousness, would there
be these big flame fests in comp.ai.philosophy?  No.

-- jeff