Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2586 sci.philosophy.tech:1769
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!europa.asd.contel.com!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Message-ID: <5920@skye.ed.ac.uk>
Date: 8 Jan 92 23:07:53 GMT
References: <1991Dec24.014716.6901@husc3.harvard.edu> <1991Dec25.042628.18737@bronze.ucs.indiana.edu> <1991Dec25.015221.6911@husc3.harvard.edu> <1991Dec28.221923.17443@bronze.ucs.indiana.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 16

In article <1991Dec28.221923.17443@bronze.ucs.indiana.edu> chalmers@bronze.ucs.indiana.edu (David Chalmers) writes:
>So, given that there exists at least one human in a mental state (e.g.
>understanding): it follows that there exists a Turing machine such that
>any system that realizes that Turing machine (in the appropriate state)
>possesses that mental state.  This is precisely "strong AI" as
>characterized by Searle.

What if it works for only some states that are of interest?
E.g., not for understanding.

>Note that epistemological points are entirely irrelevant.  Neither
>supervenience nor "strong AI" makes any epistemic claim.

I don't understand this.  It now sounds as if strong AI were an
existence claim (there's some program that's the right one) without
any claim that we can ever know what program it is.
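
To make the "realizes" talk concrete, here is a toy sketch (in
Python; the machine, the names, and the example are purely
illustrative assumptions of mine, not anything from Searle or
Chalmers).  A Turing machine is just a transition table, and
"realizing" it means instantiating that table's behaviour in some
substrate or other:

    # Toy machine (illustrative): flip bits left to right until a blank.
    # Transition table: (state, symbol) -> (next state, write, head move)
    TABLE = {
        ('flip', '0'): ('flip', '1', 1),
        ('flip', '1'): ('flip', '0', 1),
        ('flip', '_'): ('halt', '_', 0),
    }

    def run(table, tape, state='flip', head=0):
        """Run until the machine halts; return the final tape contents."""
        tape = list(tape)
        while state != 'halt':
            if head == len(tape):          # extend the tape with blanks
                tape.append('_')
            state, write, move = table[(state, tape[head])]
            tape[head] = write
            head += move
        return ''.join(tape)

    print(run(TABLE, '0110'))   # prints 1001_

The substrate-neutrality is the point: silicon, neurons, or clerks
with paper that implement TABLE all realize the same machine, and
the strong AI claim above is that being in the right state of the
right such machine suffices for the mental state.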