From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!decwrl!decwrl!atha!aupair.cs.athabascau.ca!burt Wed Oct 14 14:58:32 EDT 1992
Article 7202 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4728 comp.ai.neural-nets:4661 comp.ai.philosophy:7202 sci.psychology:4796
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!decwrl!decwrl!atha!aupair.cs.athabascau.ca!burt
From: burt@aupair.cs.athabascau.ca (Burt Voorhees)
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <burt.718784794@aupair.cs.athabascau.ca>
Date: 11 Oct 92 06:26:34 GMT
References: <burt.718398109@aupair.cs.athabascau.ca> <26608@castle.ed.ac.uk>
     <burt.718665726@aupair.cs.athabascau.ca>
Sender: news@cs.athabascau.ca
Followup-To: comp.ai
Lines: 44

>     I find it interesting that people who support what is called "strong AI"
>   are now trying to get out of the Godel argument against this by saying
>   that Godel's theorems only apply to consistent systems and it is clear
>   that any machine which matches human capacities would have to be running
>   on some sort of inconsistent system.

>I do not know what you are talking about. Do you have references
>to any books or articles that contain such an argument made by a
>supporter of strong AI?

  No, there were several postings sent to me by e-mail in response
to an earlier posting of mine here.  Each said that, well, we get away from
Godel because we don't really need consistent systems, since human
intelligence is clearly inconsistent anyway.
  On the other hand, I have a suspicion that fame and fortune await the person
who can stand Godel on his head and develop a mathematics and logic of
complete but inconsistent systems...

>If you are interested in a refutation of Lucas's G"odel argument,
>see [eg] R. Kirk's article in Synthese [1].

  But Lucas's argument is not the only one.  Penrose doesn't really
follow his Godel argument to its obvious conclusion.  Namely, we have
a formal system.  Okay, it has a Godel proposition which we know is true,
but which can't be proved within the system.  We generate a bigger formal
system which can prove this proposition.  Etc.  This gives an infinite
regress with the structure, mathematically, of an inverse limit system.
Using the inverse limit theorem we know that such a limit exists, but
by the construction it must have different properties from any other
formal system in the sequence.  E.g., by construction it must be complete,
hence inconsistent.  The problem for machine intelligence is that it is
not possible to reach this limit in a finite number of steps (shades
of Zeno).  That is, at some point there must be an (inductive) jump,
and some people (e.g., Searle and Penrose) doubt that this can be a
matter of simply running an algorithmic program.
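  The tower of systems just described can be written out schematically
(the notation is mine; G(T) stands for the Godel sentence of the theory T,
and T_0 is any consistent, recursively axiomatized theory containing enough
arithmetic):

```latex
\begin{align*}
  T_0       &\quad \text{(base theory; consistent, so } G(T_0)
               \text{ is true but unprovable in } T_0\text{)} \\
  T_{n+1}   &= T_n \cup \{\, G(T_n) \,\}
               \quad \text{(extend by adjoining the Godel sentence)} \\
  T_\omega  &= \bigcup_{n \ge 0} T_n
               \quad \text{(the limit of the chain)}
\end{align*}
% Each T_{n+1} proves G(T_n), but, being itself a consistent formal
% system, T_{n+1} has its own Godel sentence G(T_{n+1}), so no finite
% stage of the tower closes off the regress.
```

  Nothing in the sketch settles whether the limit stage is reachable by a
machine; it just makes explicit that each extension step manufactures a
fresh unprovable sentence for the next step to absorb.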

>oz
>---
>[1] Roland Kirk
>    Mental Machinery and Godel
>    Synthese, 66, 437-452. 1986
>---
>It is harder to get lost in an imagined forest than in a real one,
>for the former assists the thinker furtively. - GOLEM lect. XLIII


