Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <Bvz218.5B6@usenet.ucs.indiana.edu>
Organization: Indiana University
References: <26608@castle.ed.ac.uk> <burt.718665726@aupair.cs.athabascau.ca> <OZ.92Oct10164812@ursa.sis.yorku.ca>
Date: Sun, 11 Oct 1992 19:27:08 GMT
Lines: 210

In article <OZ.92Oct10164812@ursa.sis.yorku.ca> oz@ursa.sis.yorku.ca (Ozan Yigit) writes:

>If you are interested in a refutation of Lucas's Gödel argument,
>see [eg] R. Kirk's article in Synthese [1].
>
>[1] Roland Kirk
>    Mental Machinery and Godel
>    Synthese, 66, 437-452. 1986

Is this really by "Roland Kirk"?  I had presumed it was by Robert Kirk,
the British philosopher.  In any case, I don't think this paper is
a particularly good refutation of Lucas (although in general, Kirk is
an interesting and underrated philosopher).  It takes the line that
although humans may be modeled by a formal system, the output of that
formal system won't be sentences of arithmetic -- they'll rather be
at the level of actions, or noises, or whatever, so Godel's theorem
doesn't apply.  (A similar line to this is taken by Dennett in
_Brainstorms_, and also by Hofstadter in _GEB_.)

This misses the point that if we had a formal system that modeled the
*actions* of a human, we could straightforwardly use it to generate
another formal system which took sentences of arithmetic as input and
produced "true" or "false" (or "don't know") as an output.  This
new system -- call it M' -- will take a sentence of arithmetic as
input, and then proceed to use the original system (M) to simulate
the process of giving that sentence to the human, and asking them
whether it is true or false, requiring them to check off the appropriate
box on an answer sheet when they're done.  Presumably there will be
a straightforward decision procedure to tell whether a box is checked
off or not, so M' can use this to return an output of "true" or "false"
according to the human's judgment.  (Actually, a "don't know" output
isn't required, if we allow that M' can run forever, which is fine for
the purposes of the argument.)
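
To make the construction concrete, here is a minimal sketch in Python.
Everything in it (StubM, the stimulus and action encodings) is a
hypothetical illustration of the wrapping idea, not a serious model of
anything:

  # M is assumed to be an action-level simulator of a human: step()
  # takes a stimulus and returns the simulated human's next actions.
  class StubM:
      '''Stand-in for M.  A real M would simulate a human working
      through the answer sheet; this dummy just ticks the true box.'''
      def step(self, stimulus):
          return [("check_box", "true")]

  def m_prime(M, sentence):
      '''M': present a sentence of arithmetic to the simulated human,
      then read the verdict off the answer sheet.'''
      stimulus = ("answer_sheet", sentence)
      while True:                     # may run forever: no "don't know"
          for action in M.step(stimulus):
              if action == ("check_box", "true"):
                  return "true"
              if action == ("check_box", "false"):
                  return "false"

  print(m_prime(StubM(), "1 + 1 = 2"))   # -> true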

The Godel/Lucas argument can now be run against the new system M',
so this point is an insufficient refutation of the argument.

>     I find it interesting that people who support what is called "strong AI"
>   are now trying to get out of the Godel argument against this by saying
>   that Godel's theorems only apply to consistent systems and it is clear
>   that any machine which matches human capacities would have to be running
>   on some sort of inconsistent system.
>
>I do not know what you are talking about. Do you have references
>to any books or articles that contain such an argument made by a
>supporter of strong AI?

Well, Minsky and McDermott have made a similar argument on the net,
and Hofstadter makes an argument like this in his book (humans are
inconsistent, so the Godel argument doesn't apply).  I don't think
this argument is necessarily *wrong*, but I do think that it proves
too much.  If that were all that was wrong with the Lucas argument,
then it would establish a substantive conclusion: that inconsistency
is a vital part of human mathematical competence, allowing humans
to see certain truths that no consistent system could see.  Now
perhaps this is true, but I certainly don't accept it (to be sure,
inconsistent-seeming creative processes may be part of our
competence in the process of discovery, but I don't think they play
an essential role in the final process of justification), and I
certainly don't think that it has been established.  This "refutation"
is therefore too kind to the Godel argument, in allowing that it
establishes such a strong conclusion.  I don't think that such a
conclusion is established; to find out what's really wrong with
the Lucas argument, then, one has to look elsewhere.

The key thing that's wrong with the argument, as Daryl McCullough
has pointed out, and as was pointed out by Putnam a few years
before Lucas's argument ever appeared (!), is that in supposing
we have the ability to determine the truth of the "Godel sentence"
of any formal system, it supposes that we have the ability to
determine whether any formal system is consistent.  There's no
prior reason to suppose that we have this ability in general, any
more than there is any prior reason to suppose that we can determine
all the truths of arithmetic.
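
(To put the point in notation -- a sketch only, assuming F is a
consistent, recursively axiomatized system extending arithmetic.  The
Godel sentence G_F satisfies

  F \vdash G_F \leftrightarrow \neg\mathrm{Prov}_F(\ulcorner G_F \urcorner)

and, semantically, G_F is true if and only if \mathrm{Con}(F).  So
"seeing that G_F is true" is exactly as hard as seeing that F is
consistent, and no easier.)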

This is really all that needs to be said in order to refute Lucas,
but there are a few more interesting twists that come up in answering
various counterarguments.  For example, in response to the above kind
of reply, Lucas has generally asserted: "but this formal system is
being put forward as a model of *me*, and *I* am certainly consistent".
(Let's grant him the latter point, and let's even grant him the
point that he knows he is consistent (through some introspective
certainty?), again in the interest of not allowing the Godel
argument to establish any substantive conclusion -- if it had the
conclusion that no being with at least our mathematical competence
could have rational justification for the belief that it was
consistent, that would be a strong conclusion.)  Therefore Lucas
claims that we can rule out a priori the possibility that the
system being put forward is inconsistent.  (Recall the dialectical
nature of Lucas's argument: he claims that he can out-Godelize any
machine that is put forward as a model of him, showing that he cannot
be equivalent to that machine.)

This is from a draft of a paper by Lucas called "Mind, Machines,
and Godel: A Retrospect", written in the last year or two -- does
anybody know if that has been published anywhere?

  Before wasting time on the mechanist's claim, it is reasonable
  to ask him some questions about his machine to see whether his
  seriously maintained claim has serious backing.  It is reasonable
  to ask him not only what the specification of the machine is, but
  whether it is consistent.  Unless it is consistent, the claim
  will not get off the ground.  If it is warranted to be consistent,
  then that gives the mind the premise it needs.  The consistency
  of the machine is established not by the mathematical ability of
  the mind but on the word of the mechanist.  The mechanist has
  claimed that his machine is consistent.  If so, it cannot prove
  its Godelian sentence, which the mind can nonetheless see to be
  true: if not, it is out of court anyway.

There are any number of things that are wrong with this argument
(it's fun trying to isolate them -- read no further if you want
this fun).  Firstly and most obviously, even the machine itself
(call it M) will be able to claim "*if* M is consistent, then
the Godel sentence of M is true".  (As the Godel sentence of M
is straightforwardly equivalent to M's consistency.)  But this
is all Lucas is claiming he can do -- so he doesn't have any
abilities over and above M's.  Any supposed lack in the machine's
abilities lies only in its inability to see its own consistency
for itself -- but Lucas is here acknowledging that he can't see
this consistency for himself either, so there's no relevant
difference.  A second related point is that "If so, it cannot
prove its Godelian sentence, which the mind can nonetheless see
to be true" is a false claim; it should be "If so, it cannot
prove its Godelian sentence, which the mind can nonetheless see
to be implied by its consistency".  But that's no difficulty
for mechanism.
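
(For the record, the equivalence appealed to above is the formalized
version of Godel's theorems.  Assuming M satisfies the usual
derivability conditions, in LaTeX notation:

  M \vdash \mathrm{Con}(M) \leftrightarrow G_M,
  \qquad\text{and so}\qquad
  M \vdash \mathrm{Con}(M) \rightarrow G_M.

The conditional Lucas asserts is thus available to the machine itself.)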

A second problem, obviously, lies in the claim that the mechanist
should know whether the machine is consistent or not -- certainly
such a claim doesn't follow from the truth of mechanism.  (Mechanism
doesn't even imply that anyone could *know* what system I was;
it just asserts that I *am* such a system.)

Lucas goes on to counter objections like these by saying something
like: "I can *at least* ask the mechanist whether the system is
capable of demonstrating its Godel sentence or not.  If it is, then
it's inconsistent and I can go home.  If it's not, then there's
at least one sentence it can't demonstrate, so it must be consistent."
Let's grant Lucas that he can tell whether the machine can demonstrate
its Godel sentence (maybe he does a quick run, or something).  Even
so, this is obviously a terrible answer, as the inference in the
last sentence is hopeless.  To suppose that any inconsistent machine
would demonstrate every sentence in the language is to suppose that
the class of machines is limited to those that work by directly
following the rules of logic -- direct implementations of first-order
theorem-provers, perhaps.  But obviously the class of machines is
not so limited, and almost any machine with a halfway plausible
cognitive architecture wouldn't work anything like that.
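
Here is a toy illustration in Python, hypothetical and not meant as any
particular cognitive architecture: a machine whose belief store is
inconsistent, but which answers queries by lookup rather than by
closing its beliefs under classical consequence.  It never "explodes":

  # A jointly inconsistent belief store.
  beliefs = {"P", "not P", "2+2=4"}

  def demonstrates(sentence):
      # Lookup only: with no closure under the classical rules, the
      # contradiction never licenses arbitrary conclusions.
      return sentence in beliefs

  print(demonstrates("Q"))    # False: the inconsistency doesn't yield Q

Only a machine that closed its beliefs under the full classical rules
would demonstrate every sentence from a contradiction.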

There is still one thing that I've skipped over, though, and that
is this.  Lucas might reasonably claim that if mechanism is true,
then we will be able to *know* what formal systems simulate us,
e.g. by examining our brain mechanisms.  If we did, then we could
know that the system put forward before us was a model of ourselves,
and therefore we could know that it was consistent (recall that by
assumption we know that we are consistent).  Therefore we would
be able to demonstrate its Godel sentence, which is *our* Godel
sentence.  But this is precisely what we could not do.  So we
have a contradiction, and mechanism is false.

This kind of argument has led some people to claim that in fact
we *can't* know what system we are.  Benacerraf makes such a
point in one of his papers, and Torkel Franzen made such a point
on the net last year (I think his claim was that there's no
justification for the claim that we can know what system we are,
and this is enough to refute the argument).  This may be well and
good, but it again seems to be a case of the argument proving
too much -- if this were all that were wrong with the argument,
then it would *establish* that we couldn't know precisely
what systems could model us, and that seems to be a strong
conclusion indeed for such an argument to prove (I certainly
don't believe this conclusion, and nor, presumably, do most
cognitive scientists).  So again, in the interest of maximally
defanging the Lucas argument, I suggest that we should concede
that we might be able to determine what system we are, and look
elsewhere for the problem.

Given that we could know what system we are, doesn't that demonstrate
a contradiction, as above, in that it implies that we could know
the truth of our own Godel sentence?  No, it doesn't, and that's
because the knowledge comes from *external* means (such as inspection
of the brain).  The truth of the Godel sentence G for M is equivalent
to the truth of "M could not demonstrate G by *internal* means" (i.e.,
more or less by sitting there and thinking about math under its
own steam).  That M could come to know the truth of G by external
means is no contradiction at all.  Any smart enough computational
system (that was consistent and believed it was consistent) could come
to do exactly the same thing, if confronted by perceptual evidence
of its own design (or indeed, if told by an ever-reliable God figure
that G was true).  There's not the slightest contradiction there.
(Daryl McCullough will probably jump on me now, but I'm ready for him.)
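
(Schematically, and only as a sketch: learning Con(M) by external means
just moves the system to the extension

  M^* = M + \mathrm{Con}(M), \qquad M^* \vdash G_M,

and M^*, if consistent, has its own Godel sentence G_{M^*}, which *it*
cannot prove.  The ladder never closes, and no contradiction appears.)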

Of course, you might try to use Godelian methods to come up with
a sentence G' that is equivalent to "M could not come to know G'
even by *external* means", thus finding a contradiction here.
Good luck to you.

I think that this analysis succeeds in defanging the Lucas argument
completely, showing that it has no strong consequences for AI or
cognitive science whatsoever, over and above the obvious limitation
that if we're machines, we can't prove everything.  OK, it also
establishes, under certain assumptions, that we can't determine
what system we are by pure introspection, but that's not much of
a surprise.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."


