From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!rpi!utcsri!utgpu!pindor Thu Oct  8 10:11:01 EDT 1992
Article 7095 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!ames!haven.umd.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!rpi!utcsri!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <BvI81J.92B@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Sep24.000850.6734@hilbert.cyprs.rain.com> <1992Sep28.164828.2122@meteor.wisc.edu> <1992Sep30.205233.662@hilbert.cyprs.rain.com> <1992Oct1.210056.13084@meteor.wisc.edu>
Date: Fri, 2 Oct 1992 17:17:41 GMT
Lines: 63

In article <1992Oct1.210056.13084@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>In article <1992Sep30.205233.662@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:
>
>>In Searle's argument, he forces one domain (chinese) to use one
>>representation different from any other representation in use, and
>>explicitly refuses to provide the maps between related domains. Thus, he
>>prevents statements in chinese from having any relationship to any other
>>statements the human entertains.  At the _system_ level only, - rules+human,
>>each chinese statement is embedded in a network of relations with the
>>other chinese statements, and it is at this level where understanding
>>occurs _IF ANY_. (See???!? I am not assuming that if the behavior is lifelike,
>>the system is alive - as you state above - I am simply showing that there
>is more than one level at which understanding could possibly be occurring.)
>
>Well, I am with you until the very last phrase. The CR argument presumes
>"understanding" to be an experiential, rather than functional phenomenon.
>If any understanding occurs in the sense in which most people understand
>"understanding" then an experience must occur. It is only in that sense
>that the Chinese Room argument has meaning, because it demonstrates that
>if algorithmic natural language is possible, it follows that natural
>language without experiential understanding is possible.
>
A trouble with 'experiential understanding' is that it is a purely subjective
phenomenon - you can only be sure that you have it yourself. We extrapolate its
existence to other people by analogy - they are like us (note the trouble
people had, and some still do, with accepting as human beings those who look
different - e.g. have a different skin colour). Consequently there is no way
of determining that something non-human has 'experiential understanding',
do you agree?
This is the trap Searle and his followers fall into - for anything to have
understanding, it must contain somewhere a human, who by definition knows what
(experiential) understanding is, that being something humans have.
Note your statement: "If any understanding occurs....then an experience must
occur". You of course mean 'human experience', because that is the only kind
we know exists, right? How would you know whether 'experience occurs' or not,
other than by a human having or not having an experience?

.............
>Do I kill an intelligent being by getting bored and deciding not to follow
>the rules?
>
If the physico-chemical processes in your brain stopped, you would be dead,
do you agree? Why, then, is the above so ridiculous?

>Summarizing: It seems to me that you are proposing that the human + algorithm 
>has an experience separate from that of the human alone, which I find
>an extremely dubious proposition. Either that, or you are using 

Why? Could you give a reason? What do you know about the 'experience' of
anything but humans? Incidentally, can a group of humans (say, a society) have
a collective experience (culture?)? If those people then decided to disperse,
the experiencing entity would be killed, wouldn't it?
.........

>mt
>

Andrzej Pindor
-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca