From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff Thu Jan  9 10:34:15 EST 1992
Article 2570 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Searle and the Chinese Room
Message-ID: <5913@skye.ed.ac.uk>
Date: 8 Jan 92 21:40:10 GMT
References: <1991Dec5.210724.12480@cs.yale.edu> <1991Dec8.192843.6951@psych.toronto.edu> <1991Dec11.170157.27053@cs.yale.edu> <1991Dec11.203452.9419@psych.toronto.edu> <317@tdatirv.UUCP>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 17

In article <317@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>Hardly.  When I say I understand a word that usually simply means I do
>not have to *consciously* think about its significance.

How does this fit with McDermott's suggestion that

   In each case the language-using system observes the user of a
   symbol system and comments on a semantic interpretation of the
   symbols.  In one case the system observed is the same as the
   observer; in the other case it's the Zulus.

>A virtual person is the set of behaviors, memories and attitudes that
>are instantiated by the operation of an algorithm in my brain, but which
>do not alter my own memories or attitudes.

Is that really supposed to be all there is to a person: a set of
behaviors, memories, and attitudes?
