From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!bronze.ucs.indiana.edu!chalmers Wed Oct 14 14:59:01 EDT 1992
Article 7247 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai:4759 comp.ai.neural-nets:4695 comp.ai.philosophy:7247 sci.psychology:4827
Newsgroups: comp.ai,comp.ai.neural-nets,comp.ai.philosophy,sci.psychology
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!sol.ctr.columbia.edu!usenet.ucs.indiana.edu!bronze.ucs.indiana.edu!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Human intelligence vs. Machine intelligence
Message-ID: <Bw1r05.Lz5@usenet.ucs.indiana.edu>
Sender: news@usenet.ucs.indiana.edu (USENET News System)
Nntp-Posting-Host: bronze.ucs.indiana.edu
Organization: Indiana University
References: <OZ.92Oct10164812@ursa.sis.yorku.ca> <Bvz218.5B6@usenet.ucs.indiana.edu> <OZ.92Oct12011443@ursa.sis.yorku.ca>
Date: Tue, 13 Oct 1992 06:21:41 GMT
Lines: 38

In article <OZ.92Oct12011443@ursa.sis.yorku.ca> oz@ursa.sis.yorku.ca (Ozan Yigit) writes:

>I don't think this sound-bite summary of yours does justice to his
>paper. For example, Kirk also argues that one is _not_ required to
>grant Lucas or any other bearer of the Gödelian argument that anybody
>at any time instantiates only _one_ formal system.

AI (in common incarnations) claims that there exist computational
objects such that implementations of these have at least our
mathematical competence.  That's all that's required for the
Lucas argument to run.  AI also often claims that there exist
computational objects such that implementations of these simulate
my behaviour.  That's more than what's required, but it's certainly
enough.

Whether we "instantiate" one or more formal systems is not the point
(we certainly implement many).  What matters is whether there is one
that can simulate us.

>It also includes a
>reply to, what I believe to be the gist of your "misses the point"
>bit [elided, see <Bvz218.5B6@usenet.ucs.indiana.edu>] at page 448.

The objection there isn't quite the same as mine, but it is related.
His reply, in any case, seems to miss the point -- nothing in the
objection implies that the person's sentence-outputs will be theorems
of the formal system that simulates physics.  It's saying that given
such a formal system, one can readily extract the sentences from it.

In any case, my version didn't require that we can translate his actions
into sentences.  All we have to do is distinguish between two
prespecified kinds of action, a yes-action and a no-action.  The
sentences are given as *input*, so we can fix their form.

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."