Article 5459 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!sun-barr!olivea!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Some empirical data relevant to Searle's argument
Message-ID: <11@tdatirv.UUCP>
Date: 6 May 92 19:11:17 GMT
References: <1992May4.172948.19454@organpipe.uug.arizona.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 58

In article <1992May4.172948.19454@organpipe.uug.arizona.edu> bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:
|
|  The real power of the Chinese Room lies in the "argument
|from intentionality".  To Searle, and to Jeff Dalton, and
|to common sense, it is just obvious that humans believe
|things and perceive things, and that we know what it is
|that we believe and perceive.

True enough, as far as humans are concerned.  But how do we objectively
recognize whether some other entity believes/perceives and knows it?
What is the identification criterion for this?

| The Chinese Room is an
|attempt to show that computers, even if they behave
|correctly and implement the right sorts of programs, 
|cannot have this kind of intentionality.

And it shows nothing of the kind.  It only shows that 'intuition' and
'common sense' cannot answer the question usefully for non-human entities.

| but I
|do think that Searle's conclusion is correct:  computers
|cannot possess strong intentionality (which requires
|infallibly knowing what it is that you believe and
|perceive).

Really?  Since when are humans infallible at *anything*?
Even in knowing ourselves we fall short of perfect knowledge: we often
have vast blind spots when it comes to our own motivations and
perceptions.

|  As Daryl McCullough has responded, this argument is
|incomplete.  To show that computers cannot match humans,
|it is not enough to show that computers lack something;
|it is also necessary to show that humans actually possess
|it.  To Jeff Dalton this response seems lame and almost
|dishonest:  it is "just obvious" that humans know what 
|they believe, perceive, and understand.

Yes, I agree that it is obvious that humans know this (to a large degree),
but what is *not* obvious is that the *way* they do this is non-computable.

If humans' self-knowledge is accomplished in a computable way, then
computers are just as capable of it as humans are.
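
To make that claim concrete, here is a toy sketch in Python (the class
and method names are my own invention, purely illustrative): a system
whose "beliefs" are stored propositions, and whose "self-knowledge" is
just a second-order query over that same store.  If human self-knowledge
is mechanizable in anything like this way, it is computable by definition.

    # Toy model: a believer that can introspect its own beliefs.
    class Believer:
        def __init__(self):
            self.beliefs = set()    # stored propositions

        def believe(self, proposition):
            self.beliefs.add(proposition)

        def believes(self, proposition):
            # First-order: does the system hold this belief?
            return proposition in self.beliefs

        def knows_it_believes(self, proposition):
            # Second-order: a computable check on the system's own
            # state.  (Real self-knowledge is surely messier; the
            # point is only that nothing here is non-computable.)
            return self.believes(proposition)

    b = Believer()
    b.believe("snow is white")
    print(b.believes("snow is white"))            # True
    print(b.knows_it_believes("snow is white"))   # True

None of this touches Searle's worry about what the symbols *mean*, of
course; it only shows that second-order access to one's own state is
computationally cheap.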

|  The paper is called "How we know
|our minds:  the illusion of first-person knowledge of
|intentionality", by Alison Gopnik.  To very briefly
|summarize, the paper (a preprint for Behavioral and Brain
|Sciences) lays out and discusses evidence that very young
|children are capable of having beliefs but do not know what
|it is that they believe.

I would suspect this is true even of (at least some) adults.

-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)