From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!agate!stanford.edu!CSD-NewsHost.Stanford.EDU!t.Stanford.EDU!ginsberg Wed Oct 14 14:58:40 EDT 1992
Article 7213 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4736 comp.ai.neural-nets:4666 comp.ai.philosophy:7213 sci.psychology:4801
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!agate!stanford.edu!CSD-NewsHost.Stanford.EDU!t.Stanford.EDU!ginsberg
From: ginsberg@t.Stanford.EDU (Matthew L. Ginsberg)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <1992Oct11.200006.685@CSD-NewsHost.Stanford.EDU>
Keywords: penrose, church-turing hypothesis
Sender: news@CSD-NewsHost.Stanford.EDU
Organization: Computer Science Department, Stanford University
References: <MOFFAT.92Oct7105034@uvapsy.psy.uva.nl> <1992Oct7.151533.7822@CSD-NewsHost.Stanford.EDU> <BvytMD.9FC@cs.bham.ac.uk>
Date: Sun, 11 Oct 1992 20:00:06 GMT
Lines: 36


I don't want to get involved in this, really I don't.  Let me only
state what I think the argument is about:

By "strong AI" is meant, I believe, the view that an algorithm can
respond to stimulus in a way exhibiting intelligence.  Strong AI is
important because -- among other things -- it provides the fundamental
justification for the continued support of our field.

Penrose says that the strong AI thesis is wrong.  His argument, very
briefly, is that Godel's theorem is the linchpin in a proof that the
behavior of mathematicians cannot be duplicated by algorithmic
methods.  I believe that Penrose's argument here is technically
flawed, but this editor buffer is too small for me to refute it here.
: )

Sloman says that the strong AI thesis is wrong because "behaviour
alone is not a sufficient basis for attributing intelligence" [his AIJ
review, p.365].  He then goes on to raise a variety of arguments
against Penrose, having already sold out on the fundamental question of
strong AI's validity.  And Sloman's claim about behavior ignores what
is probably the most profound lesson of twentieth-century science:
claims are only meaningful to the extent that they can be tested.  This
observation underlies advances in the philosophy of science, the
development of quantum mechanics, and the development of theories of
computation.  And what it tells us is that behavior alone is a
sufficient basis for attributing anything whatsoever.

We in AI cannot afford to relinquish the strong AI thesis, and at
this point, we have no substantive reason for doing so.  It is a shame
that the only reply to Penrose that has appeared in our premier journal
does just this.

						Matt Ginsberg
