From newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!haven.umd.edu!uunet!sun-barr!west.West.Sun.COM!cronkite.Central.Sun.COM!texsun!exucom.exu.ericsson.se!pc254185.exu.ericsson.se!exukjb Wed Sep 23 16:54:46 EDT 1992
Article 7010 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.skeptic:20378 comp.ai.philosophy:7010
Path: newshub.ccs.yorku.ca!torn!utcsri!rpi!zaphod.mps.ohio-state.edu!darwin.sura.net!haven.umd.edu!uunet!sun-barr!west.West.Sun.COM!cronkite.Central.Sun.COM!texsun!exucom.exu.ericsson.se!pc254185.exu.ericsson.se!exukjb
From: exukjb@exu.ericsson.se (ken bell)
Newsgroups: sci.skeptic,comp.ai.philosophy
Subject: Re: Brain and Mind
Message-ID: <exukjb.221.717191278@exu.ericsson.se>
Date: 22 Sep 92 19:47:58 GMT
References: <1992Sep13.194856.21976@meteor.wisc.edu> <1992Sep22.043249.4954@meteor.wisc.edu>
Sender: news@exu.ericsson.se
Organization: Ericsson Network Systems, Inc.
Lines: 358
Nntp-Posting-Host: pc254185.exu.ericsson.se
X-Disclaimer: This article was posted by a user at Ericsson Network Systems
              The opinions expressed are strictly those of the user and
              not necessarily those of Ericsson Network Systems.

In article <1992Sep22.043249.4954@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>From: tobis@meteor.wisc.edu (Michael Tobis)
>Subject: Re: Brain and Mind
>Date: 22 Sep 92 04:32:49 GMT

>Being new to this newsgroup, and probably not as well read on the topic
>as some of you, I presume I am covering ground many of you have seen before.
>Still, perhaps you may be interested in some of my points on the mind/body
>problem, or perhaps in some of Max Webb's rebuttals.

>I am posting some email Max and I have had on the subject. For
>context, there is this paragraph, which appeared in a posting of mine
>on sci.skeptic:

>> > An otherwise appalling piece of new age drivel called "The Holotropic Mind"
>> > (I heartily recommend against wasting your time on it) nevertheless makes
>> > an interesting analogy. Images appear on your TV screen, and if your TV is
>> > damaged, the pictures are distorted.  Yet no one claims that the images are
>> > caused by the TV. 

>The point is that though the brain is obviously involved in consciousness
>in some way, this does not demonstrate that consciousness is caused by
>the physical structure of the brain.

>It is widely believed that dualism is the province of superstition and
>backwardness, but I am convinced that such an assertion is profoundly
>premature at best, and probably just wrong. I will be interested in
>any comments on the following discussion.

>The earliest letter I still have is this one, from Max. Quoted (>)
>comments are mine.

>===

> Subject: Re: Brain and Mind (was: Logic and God)
> 
>   > I had best take a break from this and get some work done. Thanks for your
>   > interesting response.
> 
> Thanks.
> 
>   > Could you please clarify what you mean by cartesian theater? And what
>   > you mean by a "systems reply" to Searle's argument?
> 
> Tsk, tsk. You quoted Dennett's book on dualism, but you hadn't actually
> read it! For shame :)
> 
> The "cartesian theatre" is exactly what you illustrate with your
> TV (brain) and mind (viewer). It is the idea that there is a central
> self, which receives perceptions and issues commands in a definite order.
> It is the idea that there is a line drawn somewhere - when information
> crosses that line, it is in our awareness. There may be psychological
> reasons for us preferring this idea that have nothing to do with evidence.
> 
> The experimentally verified fact of the matter is that it
> is much more complex than that. I cannot do Dennett's idea justice
> in a short note; why not just read the book? If you cannot get hold of
> a copy, write and I will type up something in more detail.
> 
>   > I will eventually make an effort to explain why I think a dualist position is
>   > not a homunculus or phlogiston type argument, but I think I must leave it
>   > where it lies for a few days.
> 
> Oh yeah, the "systems reply" to Searle's argument is that the room (person
> + rules) understands Chinese at one level of abstraction, but the person
> by himself does not.
> 
> Consider this version of Searle's argument. Take a Hottentot (someone
> whose education does not include integers greater than 3, or arithmetic)
> and put him in a room, along with rules for binary 2's complement arithmetic.
> Every so often, we push a sandwich and a slip of paper holding
> an operation name (* - + /) and 2 binary integers [represented as series of
> dots and dashes] under the door. He eats the sandwich, follows the mechanical
> rules, derives a new series of dots and dashes, and shoves it under the
> door the other way. He has no idea what these dots and dashes mean, but
> it will get him another sandwich.
> 
> Does he have the rule set for performing multiplication down? Yes,
> certainly. Can the room (Fred + rules) perform multiplication given
> the protocol we have set up? Certainly.
> 
> Does Fred know how to multiply the numbers he knows? No! Let's ask him:
> 
> Us: "Hey Fred! What is 3 * 2 * 2 * 3 * 1?" (Recall that Hottentots only
> have names for the first 3 natural numbers.)
> 
> Fred: "Beats the hell out of me. Many?"
> 
> Do we conclude that performing multiplication cannot be a matter of rules?
> Hardly. We conclude that Fred doesn't have the mappings from this new
> domain to his previous one, and back again. Give them to him, and Fred
> will TRULY begin to understand multiplication. The two layers of abstraction
> collapse, and Fred's mechanical knowledge becomes truly internalized.
> 
> Fred tries again: "Hmmm 3 = --, 2 = -., 2 = -., 3 = --, 1 = .-. OHHH!
> the answer is [uh -..-..] which is (pause) I don't have a word for that,
> but it is this many" (he whacks the door 36 times)
> 
> Searle makes an argument of exactly this form, and would conclude that
> the 'meaning' of multiplication cannot be a rule, and that 'true
> multiplication' cannot be implemented by rules. In reality, meaning is
> a function of the knowledge's _CONTEXT_ - a bit of knowledge is understood
> precisely by integrating it with the REST of your knowledge.
> 
> Looking forward to your reply.
> 
>        Max G. Webb

>====

>I reply: (Max = >) (mt = > >)

>===
>> >Could you please clarify what you mean by cartesian theater? And what
>> >you mean by a "systems reply" to Searle's argument?
>> 
>> Tsk, tsk. You quoted Dennett's book on dualism, but you hadn't actually
>> read it! For shame :)

>I have read about a third of it. It's heavy going, and not my highest
>priority. It is, however, delightful. And, IMHO, completely wrong on
>this point.

>> The "cartesian theatre" is exactly what you illustrate with your
>> TV (brain) and mind (viewer). It is the idea that there is a central
>> self, which receives perceptions and issues commands in a definite order.
>> It is the idea that there is a line drawn somewhere - when information
>> crosses that line, it is in our awareness. There may be psychological
>> reasons for us preferring this idea that have nothing to do with evidence.

>There is evidence and there is evidence. Metaphysically, I know I am
>conscious, but within the rules of science, I cannot prove it. The line,
>odd though it is, is central to my existence in trying to be both a
>scientist and a human. It seems to me that denying that there is a problem
>is an ontological fallacy, though not a scientific one. (At least, if we
>construe science to be an extension of physics.)

>You somewhat skew the point of the TV analogy, which didn't really include 
>a "viewer", and in that it was not identical to the theater analogy. In fact,
>the TV was analogous to the brain, and the program to the mind. The point was
>that although no picture can exist without the TV, the cause of the image
>was the broadcast. Similarly, although I cannot (presumably) experience
>anything without a brain, this does not prove that the brain is sufficient
>for experience to exist. Since there are profound difficulties in reducing
>experience to an objectively observable phenomenon in a physical sense,
>I tentatively conclude that physics is not complete, although (!!) there
>may be psychological reasons for believing otherwise.

>> Oh yeah, the "systems reply" to Searle's argument is that the room (person
>> + rules) understands Chinese at one level of abstraction, but the person
>> by himself does not.  (Hottentot example)

>Mathematics is not a good example, as mathematics can be done by application
>of rules without understanding. (Have you tutored anyone in first semester
>calculus lately? Most beginners can apply nx^(n-1) without any concept
>of what a derivative is, and so can Mathematica. This is the same category
>error that confuses dreaming with REM cycles.)

>Searle's argument is not logical or physical. As a logical or physical
>argument it is devoid of content. As a metalogical or metaphysical
>argument it is compelling, but only because I am a metalogical entity,
>and understand "understanding" in a way that is deeper than merely
>functional, and not reducible to function.

>I think psychology must proceed from consciousness as an axiom. Like many
>other axiomatic systems, it will produce interesting results, while
>leaving open the question of "truth". Neither widely practiced strategy
>seems to me likely to lead to a reasonable model: neither denying
>the relevance of consciousness nor assuming that it is a physical
>phenomenon strikes me as remotely plausibly leading to a useful psychology.

>If human consciousness is taken as axiomatic, practical questions will arise
>when someone has the nerve to call an AI system conscious. They will
>never convince me, as the moral implications of a conscious artificial
>system are so enormous, and the demonstration so vague. A Turing test is
>a fine goal for a model of consciousness, but it is no proof that the
>model is identical with the thing modelled.

>Thanks for pointing out comp.ai.philosophy to me. We ought to add it to
>the distribution. I expect you won't mind me posting this note, but since
>it results from email, I will follow protocol and await your permission.

>regards
>mt

>===

>Max replies :

>===

>>> The "cartesian theatre" is ...
>>> is the idea that there is a line drawn somewhere - when information
>>> crosses that line, it is in our awareness.

>>There is evidence and there is evidence. Metaphysically, I know I am
>>conscious, but within the rules of science, I cannot prove it.

>you appear to have me confused with someone who denies the existence
>of consciousness.

>> The line,
>>odd though it is, is central to my existence in trying to be both a
>>scientist and a human.

>I'd like to explore this. I don't believe in the line, nor do I feign
>anesthesia; I feel myself fully human, and I even have experiences that
>some would interpret as religious. Maybe the consequences of the lack
>of a sharp line aren't as hateful as you think.

>>Mathematics is not a good example, as mathematics can be done by application
>>of rules without understanding.

>Nevertheless, it can be easily extended to the full natural language
>example; make it Chinese, and provide the Hottentot with inter-domain
>maps from natural language to visual, tactile, and existing concepts.
>Again, the conclusion follows - a domain has been split in two in
>Searle's argument by a representational barrier. Breach this barrier
>and we again have a unified understanding (though partially rote)
>which will over a long time develop into a more natural language
>competence (as mnemonists know).

>>.. This is the same category
>>error that confuses dreaming with REM cycles.)

>I don't think you will find that philosophers consider the phrase
>'category error' as damning as they used to. We now know that heat
>reduces to molecular kinetic energy; yet you can't substitute
>'molecular kinetic energy' for 'heat' in "he basked in the sun's heat".
>At the most, I think you can claim the rules base is more clumsily
>encoded.

>>Searle's argument is not logical or physical. As a logical or physical
>>argument it is devoid of content. As a metalogical or metaphysical
>>argument it is compelling, but only  because I am a metalogical entity,

>Sorry, I am unfamiliar with the word 'metalogical'. Care to define it?
>I also don't know how you are using 'metaphysical'.
>Could you deal more explicitly with the systems reply in your post?

>>and understand "understanding" in a way that is deeper than merely
>>functional, and not reducible to function.

>We have different intuitions.

>>I think psychology must proceed from consciousness as an axiom. Like many
>>other axiomatic systems, it will produce interesting results, while
>>leaving open the question of "truth".

>How do you propose we 'take consciousness as an axiom?' Exactly
>how does this constrain the class of models proposed? What predictions
>does it make?  Every prediction I have ever seen from dualism that was
>testable has failed. What results are you hoping for that we can't
>achieve now?

>> Neither widely practiced strategy
>>seems to me likely to lead to a reasonable model: neither denying
>>the relevance of consciousness nor assuming that it is a physical
>>phenomenon strikes me as remotely plausibly leading to a useful psychology.

>The riddle for you is "How can caspar the ghost go through the wall AND
>pick up a towel at the same time?"

>>If human consciousness is taken as axiomatic, practical questions will arise
>>when someone has the nerve to call an AI system conscious. They will
>>never convince me, as the moral implications of a conscious artificial
>>system are so enormous, and the demonstration so vague. A Turing test is
>>a fine goal for a model of consciousness, but it is no proof that the
>>model is identical with the thing modelled.

>If we do create another kind of life, I consider us morally bound to
>make the attempt to determine if it is conscious in some way like us.
>(We have to determine whether we recognise them as having rights).
>I further consider us morally bound to apply criteria that we ourselves
>could pass, without appealing to irrelevant features like appearance.

>>Thanks for pointing out comp.ai.philosophy to me. We ought to add it to
>>the distribution. I expect you won't mind me posting this note, but since
>>it results from email, I will follow protocol and await your permission.

>>regards
>>mt

>Go ahead and post it, I'll work on my reply. You might include the
>full Hottentot analogy.

>===

>I replied

>===

>I'll try to reconstruct our conversation and post it to comp.ai.philosophy
>soon. A couple of quick points in the interim:

>>>Mathematics is not a good example, as mathematics can be done by application
>>>of rules without understanding.
> 
>> Nevertheless, it can be easily extended to the full natural language
>> example.

>Granted, but that is exactly the point of the Chinese room. A valid response
>is not proof of understanding. A purely functional definition of 
>what it means to understand misses the essence of the problem. There is
>no paradox until you are willing to accept that there is a difference between
>a functional and an experiential definition of understanding.
> 
>> The riddle for you is "How can caspar the ghost go through the wall AND
>> pick up a towel at the same time?"

>A nice analogy, though you have misspelled Casper's name. But as I see it
>we each have a riddle: I, how Casper, who can go through a wall, can pick up
>a towel, and you, how Casper, who can pick up a towel, can go through a wall!
> 
>> If we do create another kind of life, I consider us morally bound to
>> make the attempt to determine if it is conscious in some way like us.
>> (We have to determine whether we recognise them as having rights).
>> I further consider us morally bound to apply criteria that we ourselves
>> could pass, without appealing to irrelevant features like appearance.

>That's what takes this discussion beyond an amusing intellectual exercise
>into eminently practical domains. I think error in either case is appalling:
>failing to grant rights to a conscious entity would be awful, but granting
>rights to a nonentity seems to me far more dangerous. Do you really fancy
>seeing much of the world's silicon running dressed-up versions of
>        main(){printf("I vote for Lyndon Larouche.\n");}
>??

>I doubt that there's any test of consciousness that all and only conscious
>beings could pass. Unless and until you can come up with such a test, I
>think we have to assume that all of our overblown toasters are just
>toasters, and all of the people are conscious. Neither of these assumptions
>is certain to be perfectly correct, but any alternative is, pragmatically
>speaking, enormously dangerous. And I see no evidence that such a test is
>possible.

>mt


How about the test of communication? I have long mused that the power,
and perhaps also the need, to communicate is the great wonder and the
central meaning of intelligence.

The real difficulty about ascriptions of consciousness is that our modes
of speech contain a conceptual structure which sets ultimate categorial
limits for the application of specific concepts within it. Some stretching
is possible, but only within limits. It is because a tree doesn't [or the
trees we know don't] exhibit activity in pursuit of self-selected goals of
the appropriate kind that we would not ascribe consciousness to it. The same
goes for more plausible candidates for such ascriptions, like certain robots
and androids [the ones we know or imagine]. To suppose that merely because
they can store and retrieve bits of information they are more like conscious
entities than anything else we can think of would be analogous to thinking
that because the phone book contains the addresses of everybody in LA, the
phone directory "knows" where each person in LA lives.

//////////////////////////////////////
/* Kenny  *   Welcome to Mind Wars! */
//////////////////////////////////////


