From newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwvax!meteor!tobis Thu Oct  8 10:11:02 EDT 1992
Article 7097 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwvax!meteor!tobis
>From: tobis@meteor.wisc.edu (Michael Tobis)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct2.185539.2953@meteor.wisc.edu>
Date: 2 Oct 92 18:55:39 GMT
References: <1992Sep30.205233.662@hilbert.cyprs.rain.com> <1992Oct1.210056.13084@meteor.wisc.edu> <BvI81J.92B@gpu.utcs.utoronto.ca>
Organization: University of Wisconsin, Meteorology and Space Science
Lines: 89

In article <BvI81J.92B@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Oct1.210056.13084@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>>In article <1992Sep30.205233.662@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:

>>The CR argument presumes
>>"understanding" to be an experiential, rather than a functional, phenomenon.
>>If any understanding occurs in the sense in which most people understand
>>"understanding" then an experience must occur. It is only in that sense
>>that the Chinese Room argument has meaning, because it demonstrates that
>>if algorithmic natural language is possible, it follows that natural
>>language without experiential understanding is possible.
>>
>A trouble with 'experiential understanding' is that it is a purely subjective
>phenomenon - you can only be sure you have it yourself. We extrapolate its
>existence to other people by analogy - they are like us (note the trouble
>people had, and some still do, with accepting as human beings those who look
>different - like having a different skin colour). Consequently there is no way
>of determining that something non-human has 'experiential understanding',
>do you agree?

Yes. This is precisely the problem.

>This is the trap Searle and his followers fall into - for anything to have
>understanding, it has to contain somewhere a human who understands; by
>definition, understanding (experiential) is something humans have.
>Note your statement:"If any understanding occurs....then an experience must 
>occur". You of course mean 'human experience', because it is the only one
>which we know that exists, right?

No, I can't go along with that. Certainly I believe animals to have experience.
Perhaps artificial constructs have experience as well. I doubt that we will
ever achieve sufficient certainty to justify assigning rights to such
constructs though.

> How would you know whether 'experience
>occurs' or not, other than by having or not having a human experiencing?

Of course, one has certainty only in one's own case. One believes in the
phenomenological existence of others by analogy. It is entirely unclear
whether we can take the analogy to the point of believing in phenomenological
existence of implementations of algorithms. 

Your question applies equally to those who would assume that an AI is
phenomenologically aware and to those, like myself, who would assume that
it is not.

>>Do I kill an intelligent being by getting bored and deciding not to follow
>>the rules?

>If the physico-chemical processes in your brain stopped, you would be dead,
>do you agree? Why is the above so ridiculous then?

I find it hard
to believe that you think a human is a single entity, while a human who has
decided to follow rules he doesn't understand is two. What if a page of
the rules is replaced by an incorrect page? Does the "entity"
"die" when the pages are swapped, or only when I attempt to apply the
rules that should be on that page?

The trouble is, ANY approach to the existence of experience seems to lead
to absurdities. All I really advocate is humility (in the face of
what I perceive as astonishing hubris in the AI community.) 

I repeat, and it looks like I will be making a habit of repeating:
that intelligence necessitates experience is an assumption, and a
not entirely satisfactory one. It is in no way demonstrated just because
you find it appealing. In fact, it is probably impossible to demonstrate.

>>Summarizing: It seems to me that you are proposing that the human + algorithm 
>>has an experience separate from that of the human alone, which I find
>>an extremely dubious proposition. 

>Why? Could you give a reason? 

No, but neither can you. I have my hunches and you have yours, but it is
precisely my point that neither hunch is testable because experience is
not verifiable in an objective way. I have my experience, and cannot prove
it to you; you presumably have the same situation. We believe each other to
be conscious entities because of our intuition, not because of anything
that can be called objective evidence.

Now, it is proposed that a successful implementation of a system whose
_design objective_ is to _pass our intuitive tests_ (dressed up with Turing's
name to give it a certain official credibility) is indeed conscious. That
is to say, the assumption is that our intuition is infallible.

That's an appallingly flimsy premise to base the entire future of evolution on.

mt
