From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!sgiblab!news.kpc.com!decwrl!atha!aupair.cs.athabascau.ca!burt Wed Oct 14 14:58:23 EDT 1992
Article 7186 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4708 comp.ai.neural-nets:4640 comp.ai.philosophy:7186 sci.psychology:4782
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!sgiblab!news.kpc.com!decwrl!atha!aupair.cs.athabascau.ca!burt
From: burt@aupair.cs.athabascau.ca (Burt Voorhees)
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <burt.718665726@aupair.cs.athabascau.ca>
Date: 9 Oct 92 21:22:06 GMT
References: <26 <burt.718398109@aupair.cs.athabascau.ca> <26608@castle.ed.ac.uk
Sender: news@cs.athabascau.ca
Followup-To: comp.ai
Lines: 12


  I find it interesting that people who support what is called "strong AI"
are now trying to get out of the Godel argument against it by saying
that Godel's theorems only apply to consistent systems, and that it is
clear that any machine which matches human capacities would have to be
running on some sort of inconsistent system.
  As I have understood it, the strong AI position is that it is possible
to create intelligent behavior on sequential machines running formal
programs.  Last I heard, formal programs were required to be based on
consistent formal systems.
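  For reference, the consistency requirement is built into the usual
statement of Godel's first theorem.  A standard textbook formulation
(my gloss, not from the thread; this is the Rosser strengthening,
since Godel's original proof needed the stronger hypothesis of
omega-consistency):

```latex
% First incompleteness theorem (Godel 1931, strengthened by Rosser 1936):
% if T is a consistent, effectively axiomatizable theory interpreting
% elementary arithmetic, then some sentence G_T is undecidable in T.
\[
  \mathrm{Con}(T) \;\Longrightarrow\;
  \exists\, G_T \,\bigl(\, T \nvdash G_T
    \;\text{ and }\; T \nvdash \neg G_T \,\bigr)
\]
```

An inconsistent theory proves everything, so the theorem says nothing
about it; that is the loophole the strong AI defense quoted above is
trying to use.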
  Seems to me there's a bit of a contradiction there.  But then, allowing
inconsistency, I guess that's okay.
