From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill Tue May 12 15:48:56 EDT 1992
Article 5400 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill
From: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Some empirical data relevant to Searle's argument
Summary: There's an interesting preprint available . . .
Message-ID: <1992May4.172948.19454@organpipe.uug.arizona.edu>
Date: 4 May 92 17:29:48 GMT
Sender: news@organpipe.uug.arizona.edu
Reply-To: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Organization: Center for Neural Systems, Memory, and Aging
Lines: 57


  Searle's "Chinese Room" argument contains some superficial
flaws that allow it to be refuted at an equally superficial
level, but, as Jeff Dalton sees, the standard refutation
(the Systems Reply) does not get at the deepest essence of
the argument.

  The real power of the Chinese Room lies in the "argument
from intentionality".  To Searle, and to Jeff Dalton, and
to common sense, it is just obvious that humans believe
things and perceive things, and that we know what it is
that we believe and perceive.  The Chinese Room is an
attempt to show that computers, even if they behave
correctly and implement the right sorts of programs, 
cannot have this kind of intentionality.  I don't think
the Chinese Room in itself is all that compelling, but I
do think that Searle's conclusion is correct:  computers
cannot possess strong intentionality (which requires
infallibly knowing what it is that you believe and
perceive).

  As Daryl McCullough has responded, this argument is
incomplete.  To show that computers cannot match humans,
it is not enough to show that computers lack something;
it is also necessary to show that humans actually possess
it.  To Jeff Dalton this response seems lame and almost
dishonest:  it is "just obvious" that humans know what 
they believe, perceive, and understand.

  My reason for making this post is to mention a newly
available preprint containing some empirical evidence that
may, if nothing else, at least remove some of the obviousness
of human intentionality.  The paper is called "How we know
our minds:  the illusion of first-person knowledge of
intentionality", by Alison Gopnik.  To very briefly
summarize, the paper (a preprint for Behavioral and Brain
Sciences) lays out and discusses evidence that very young
children are capable of having beliefs but do not know what
it is that they believe.

  I would be interested in knowing how other people react
to her argument.  The paper is available for anonymous ftp.
Use the following procedure:  
	ftp princeton.edu	(or 128.112.128.1)
	login: anonymous
	password:  <user>@<hostname>	(make sure to include "@")
	cd pub/harnad
	get bbs.gopnik
	bye
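
  For anyone who would rather script the transfer than type
the commands by hand, the same steps can be sketched in Python
using the standard ftplib module.  (This is just a sketch of
the interactive session above; the function name is mine, and
of course the host has to be reachable for it to work.)

```python
from ftplib import FTP

def fetch_gopnik(host="princeton.edu", email="user@hostname",
                 directory="pub/harnad", filename="bbs.gopnik"):
    """Retrieve the Gopnik preprint by anonymous ftp.

    Mirrors the manual procedure: open the host, log in as
    "anonymous" with your email address as the password (make
    sure it includes "@"), cd to pub/harnad, get bbs.gopnik.
    Returns the local filename it was saved under.
    """
    ftp = FTP(host)                      # ftp princeton.edu
    ftp.login("anonymous", email)        # login: anonymous
    ftp.cwd(directory)                   # cd pub/harnad
    with open(filename, "wb") as f:
        ftp.retrbinary("RETR " + filename, f.write)  # get bbs.gopnik
    ftp.quit()                           # bye
    return filename
```
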

  I stuck a few LaTeX formatting commands into my copy, so that
I could print it out more readably.  If anybody who has LaTeX
is interested, I'll be happy to Email the modified version.
  
	-- Bill