Article 7082 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!spool.mu.edu!darwin.sura.net!zaphod.mps.ohio-state.edu!uakari.primate.wisc.edu!usenet.coe.montana.edu!news.u.washington.edu!ogicse!psgrain!percy!nosun!hilbert!max
From: max@hilbert.cyprs.rain.com (Max Webb)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Sep30.205233.662@hilbert.cyprs.rain.com>
Date: 30 Sep 92 20:52:33 GMT
References: <1992Sep20.180454.4161@daffy.cs.wisc.edu> <1992Sep24.000850.6734@hilbert.cyprs.rain.com> <1992Sep28.164828.2122@meteor.wisc.edu>
Organization: Cypress Semiconductor Northwest, Beaverton Oregon
Lines: 125

In article <1992Sep28.164828.2122@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>In article <1992Sep24.000850.6734@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:
>>In article <1992Sep20.180454.4161@daffy.cs.wisc.edu> tobis@xrap3.ssec.wisc.edu (Michael Tobis) writes:
>Attacks on this [Searle's] argument say that it is objectively empty. This is
>true and entirely misses the point.  ...Taking the monist
>assumption, you conclude that there is no difference between intelligent
>behavior and consciousness...

You ignored my [systems] reply, substituted your own, and responded to
it as if it were mine. [i.e., I have never called it 'objectively empty']
I do not appreciate having words put in my mouth.  Please read for content,
and respond to what is written, ok?

>>If you still think Searle's argument is convincing, please say so, and
>>we can air it out in public. I don't think it will stand up too well.
>
>I hereby say (again) that Searle's Chinese Room argument is compelling,
>and that arguments against it make a deep and irreparable category
>mistake. Searle points out that, granting that natural language algorithms
>exist, a non-Chinese speaking human executing the algorithm will 
>certainly give functionally correct responses in Chinese to inputs
>in Chinese. ... the human does not understand the conversation he is
>facilitating...

You have never addressed (or even echoed) the substance of my reply...
So, allow me to go at it from another angle.

I subscribe to the idea of 'meaning holism', which is that the meaning of
a statement is simply its relationship to all other statements in the mental
domain of interest. In an otherwise empty mind, 'E=mc^2' has no meaning.
This popular theory of meaning is usefully applied here.

In Searle's argument, he forces one domain (Chinese) to use a
representation different from any other representation in use, and
explicitly refuses to provide the maps between related domains. Thus, he
prevents statements in Chinese from having any relationship to any other
statements the human entertains.  At the _system_ level only - rules+human -
each Chinese statement is embedded in a network of relations with the
other Chinese statements, and it is at this level where understanding
occurs _IF ANY_. (See???!? I am not assuming that if the behavior is lifelike,
the system is alive - as you state above - I am simply showing that there
is more than one level at which understanding could possibly be occurring.)

To illustrate the idea that human+rules and human are separate individuals, we
can change the problem to playing chess, and set them at odds with each
other - the ability to have opposing intentions is one of the clearest
indications of separate personal identities. Simply have the human play
one game in his customary way, and interpret the other side. With a suitable
encoding and trickery, he will never even realize he is (in a sense) playing
both sides.
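The chess illustration can be sketched as a toy program. This is my own invented stand-in, not anything Searle or anyone else has proposed: the "rule book" here just mirrors moves through the board's center, so the human, applying it as blind symbol-shuffling, produces coherent moves for an opponent with aims opposed to his own.

```python
# Toy sketch of the chess illustration: the human plays White with
# understanding, and ALSO mechanically executes a formal rule book
# mapping encoded inputs to outputs. The rule (mirroring through the
# board's center) is an invented stand-in for Searle's rule book.

def mirror_square(sq):
    # Reflect a square through the center of the board: "e2" -> "d7".
    f, r = sq[0], int(sq[1])
    return chr(ord("a") + ord("h") - ord(f)) + str(9 - r)

def rule_book(encoded_move):
    # To the human this is mere symbol-shuffling; he attaches no
    # chess meaning to the encoded strings he transforms.
    return mirror_square(encoded_move[:2]) + mirror_square(encoded_move[2:])

white_move = "e2e4"                    # played with understanding
black_reply = rule_book(white_move)    # produced by blind rule-following
print(black_reply)                     # -> "d7d5"
```

The point of the sketch is only that the rule-follower and the rules-plus-follower system can have opposing intentions, which is the mark of separate individuals.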

Now, if we unify the domains by providing the appropriate maps, they
collapse together, because the relationships between the statements in
Chinese and the rest of the human's preexisting mental domains can now
be derived - and human understanding follows immediately. It follows
because the Chinese statements are now embedded in a human mental domain.
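The meaning-holism picture above can be made concrete with a toy graph: statements are nodes, inferential links are edges. The example statements and links below are invented purely for illustration. In Searle's setup the Chinese statements form their own island of relations; adding a single translation link collapses the two domains into one.

```python
# Toy model of meaning holism: a statement's "meaning" is its web of
# relations to other statements. Nodes are statements; undirected
# edges are inferential/associative links.

from collections import defaultdict

class RelationNet:
    def __init__(self):
        self.links = defaultdict(set)

    def relate(self, a, b):
        self.links[a].add(b)
        self.links[b].add(a)

    def component(self, start):
        # All statements reachable from `start` - its mental domain.
        seen, stack = set(), [start]
        while stack:
            s = stack.pop()
            if s not in seen:
                seen.add(s)
                stack.extend(self.links[s])
        return seen

net = RelationNet()
# The human's preexisting mental domain (invented example statements).
net.relate("snow is white", "snow is cold")
net.relate("snow is cold", "ice is cold")
# The Chinese statements, related only to one another by the rule book.
net.relate("statement-A (Chinese)", "statement-B (Chinese)")

# Searle's setup: the two domains share no links.
assert "statement-A (Chinese)" not in net.component("snow is white")

# Providing a map (one translation link) collapses the domains.
net.relate("statement-A (Chinese)", "snow is white")
assert "statement-B (Chinese)" in net.component("snow is white")
```

Before the translation link exists, the Chinese statements have relations only to each other; afterward, every Chinese statement is reachable from the human's prior beliefs, which is the sense in which understanding "follows immediately".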

WILL YOU PLEASE ADDRESS THE SYSTEMS REPLY! Your assumption is that the
'understanding' is either in the _human_ or _nowhere_. The systems
reply is that the 'understanding', _if anywhere_, is in the system:
rules + human. THIS is my answer, not this paraphrase you give. I see
no sign that you have heard me yet, and I am getting discouraged.

>>I know that I am conscious, and am
>>willing to take that as an axiom. I am not thereby transformed into
>>a substance dualist like yourself. 
>Well, I don't know what you mean by "substance".

'Substance' dualism relies on there being some vague non-physical
plane of existence or substance in which 'souls' live. 'Property' dualism
only assumes that statements about 'minds' cannot be rewritten into
statements about physical facts, i.e., not that ectoplasm exists, but
that the reduction attempt fails. Dave Chalmers, for example, is a
'property' dualist. You need to read more philosophy.

>>>In article <1992Sep13.194856.21976@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>>>I am NOT arguing that rational thought should not be applied to phenomena
>>>of consciousness because there is no consciometer. I am arguing that
>>>the idea that consciousness can be explained in some objective way is
>>>at best profoundly premature (and I continue to suspect that it is
>>>undecidable). 

>>With what would you replace the assumption that consciousness can be
>>explained? The assumption that it _cannot_ be explained? Exactly what sort
>>of research program does that lead to? [Hint: none; you merely advise us
>>to give up and go home before we begin]...
>>What constraint does this place on models?
>It places enormous constraints on models. It says that a model of
>consciousness is unavailable, despite the existence of powerful
>models of information processing.

Right ... and then you say:

>>What predictions does this make? 
>Well, none. It's not a theory, it's a forthright admission that no
>theory is available.

But do you recall claiming the following:

>I think psychology must proceed from consciousness as an axiom. Like many
>other axiomatic systems, it will produce interesting results, while
>leaving open the question of "truth".

But when you are pressed, you admit that no results (let alone
interesting ones) obtain. In fact, what you are doing is axiomatically
denying the possibility of results.

>Imho a useful psychology must accept its limitations
>and accept that something utterly mysterious is happening. This still
>leaves plenty of room for reason, though not as much as one might prefer.
>In exchange, though, the door is opened for compassion.

I am more and more amazed at the conclusions you seem to think I am
forced into. As a monist, I am no more locked out of compassion than
I am forced to feign anesthesia, as you earlier seemed to be claiming.
I dislike being pigeonholed, as much as I dislike being represented by
strawmen.

You, on the other hand, propose to _define_ ourselves as having consciousness,
and rights, and _define_ large classes of other beings as being unconscious
and having no rights.  This is compassion?

>mt

	Max G. Webb


