From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!caen!uwm.edu!daffy!uwvax!meteor!tobis Thu Oct  8 10:10:54 EDT 1992
Article 7084 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sdd.hp.com!caen!uwm.edu!daffy!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct1.210056.13084@meteor.wisc.edu>
Organization: University of Wisconsin, Meteorology and Space Science
References: <1992Sep24.000850.6734@hilbert.cyprs.rain.com> <1992Sep28.164828.2122@meteor.wisc.edu> <1992Sep30.205233.662@hilbert.cyprs.rain.com>
Date: Thu, 1 Oct 92 21:00:56 GMT
Lines: 223

I've been looking forward to Max's response, and am disappointed to see 
that the cordial tone which we had maintained up to now is dissipating.

I would like to assure you all that 1) I have no intention of misrepresenting
anyone else's ideas, and I freely apologize if I have inadvertently done
so, and 2) I have no intention of using the sorts of manipulative debating
techniques so often seen in politics, and if I have failed to address a
particular point that Max perceives as important, it is because I did not
perceive that point as central. If I have been rude, it is unintentional.
If I have transgressed the accepted cultural bounds of philosophy, it is
accidental; I have no formal exposure to the field and all my limited
participation in it up to this point has been solitary.

That said, I pick up the gauntlet.

In article <1992Sep30.205233.662@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:
>In article <1992Sep28.164828.2122@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>>In article <1992Sep24.000850.6734@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:
>>>In article <1992Sep20.180454.4161@daffy.cs.wisc.edu> tobis@xrap3.ssec.wisc.edu (Michael Tobis) writes:
>>Attacks on this [Searle's] argument say that it is objectively empty. This is
>>true and entirely misses the point.  ...Taking the monist
>>assumption, you conclude that there is no difference between intelligent
>>behavior and consciousness...

>You ignored my [systems] reply, and substituted your own, and responded to
>it as if it were mine. [i.e. I have never called it 'objectively empty']
>I do not appreciate having words put in my mouth.  Please read for content,
>and respond to what is written, ok?

I fully believe that from the perspective of objectively verifiable results
in science (rather than in philosophy) the argument is empty. 

Perhaps the use of the second-person pronoun was ill-advised. Still, I think
the confusion of sentience and intelligence is at the root of the flaw in
most arguments on this subject.

>>>If you still think Searle's argument is convincing, please say so, and
>>>we can air it out in public. I don't think it will stand up too well.

>>I hereby say (again) that Searle's Chinese Room argument is compelling,
>>and that arguments against it make a deep and irreparable category
>>mistake. Searle points out that, granting that natural language algorithms
>>exist, a non-Chinese speaking human executing the algorithm will 
>>certainly give functionally correct responses in Chinese to inputs
>>in Chinese. ... the human does not understand the conversation he is
>>facilitating...

>You have never even addressed (or even echoed) the substance of my reply...

Perhaps I did not understand it, though I am honestly trying.

>So, allow me to go at it from another angle.

>I subscribe to the idea of 'meaning holism', which is that the meaning of
>a statement is simply its relationship to all other statements in the mental
>domain of interest. In an otherwise empty mind, 'E=mc^2' has no meaning.
>This popular theory of meaning is usefully applied here.

>In Searle's argument, he forces one domain (Chinese) to use one
>representation different from any other representation in use, and
>explicitly refuses to provide the maps between related domains. Thus, he
>prevents statements in Chinese from having any relationship to any other
>statements the human entertains.  At the _system_ level only - rules+human -
>each Chinese statement is embedded in a network of relations with the
>other Chinese statements, and it is at this level where understanding
>occurs _IF ANY_. (See???!? I am not assuming that if the behavior is lifelike,
>the system is alive - as you state above - I am simply showing that there
>is more than one level at which understanding could possibly be occurring.)

Well, I am with you until the very last phrase. The CR argument presumes
"understanding" to be an experiential, rather than a functional, phenomenon.
If any understanding occurs in the sense in which most people understand
"understanding", then an experience must occur. It is only in that sense
that the Chinese Room argument has meaning: it demonstrates that
if algorithmic natural language is possible, then natural language
without experiential understanding is also possible.
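
To make the functional half of that claim concrete, here is a deliberately
minimal sketch (my own toy illustration, in Python; the rule table and the
pinyin phrases are invented placeholders, not anything Searle specified).
The executor of room() gives functionally correct responses by pure symbol
lookup, with no experience of what the tokens mean:

# A toy "Chinese Room": responses come from table lookup over opaque
# symbols. Nothing here models understanding; the point is that
# functionally correct behavior requires none from the rule-follower.
RULES = {
    "ni hao": "ni hao, ni hao ma?",
    "wo hen hao": "hen gao xing",
}

def room(utterance):
    """Return the rule table's response; the executor need not know Chinese."""
    return RULES.get(utterance, "qing zai shuo yi bian")  # "please repeat"

for said in ("ni hao", "wo hen hao", "zai jian"):
    print(said, "->", room(said))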

>To illustrate the idea that human+rules and human are separate individuals, we
>can change the problem to playing chess...

I have no problem with them being separate systems. I think our difficulty
traces back to a difference in our definitions of "understanding".

>WILL YOU PLEASE ADDRESS THE SYSTEMS REPLY! Your assumption is that the
>'understanding' is either in the _human_ or _nowhere_. The systems
>reply is that the 'understanding', _if anywhere_, is in the system:
>rules + human. THIS is my answer, not this paraphrase you give. I see
>no sign that you have heard me yet, and I am getting discouraged.

The caveat _if anywhere_ above befuddles me. If understanding is nowhere,
then the argument succeeds: artificial intelligence exists but artificial
consciousness does not. If understanding is somewhere in the human +
rules system, are you implying that human + rules has a consciousness
separate from the human? If you are not, then understanding is _not_ in
the system in the sense in which "understanding" is meant, and the argument
succeeds. If you are, then how does this mysterious sentience arise?
Do I kill an intelligent being by getting bored and deciding not to follow
the rules?

Summarizing: It seems to me that you are proposing that the human + algorithm 
has an experience separate from that of the human alone, which I find
an extremely dubious proposition. Either that, or you are using 
"understanding" in a purely functional way rather than an experiential
one, and hence miss the point of the Chinese Room entirely.

>>>I know that I am conscious, and am
>>>willing to take that as an axiom. I am not thereby transformed into
>>>a substance dualist like yourself. 
>>Well, I don't know what you mean by "substance".

>'Substance' dualism, relies on there being some vague non-physical
>plane of existence or substance in which 'souls' live. 'Property' dualism
>only assumes that statements about 'minds' cannot be rewritten into
>statements about physical facts, ie. not that ectoplasm exists, but
>the reduction attempt fails. Dave Chalmers, for example, is a
>'property' dualist. 

Yes, this is an interesting distinction. Sometimes Mr. Chalmers doesn't 
seem like a dualist at all to me. I am not a dualist on faith: I can be
convinced otherwise by evidence. However, I cannot imagine in what such
evidence might consist: the gap between subjective experience and objectively
verifiable fact seems to me insurmountable. Accordingly, I think 
what you call "substance" dualism (I am more uncomfortable with the
designation than with the sort of idea it conveys) needs to be taken
seriously as a hypothesis, and not summarily dismissed just because it
appeals to the superstitious.

>You need to read more philosophy.

Agreed. But the validity of my arguments is independent of how much
philosophy I have read. (Unfortunately, there are constraints of style
and vocabulary that may weaken the effectiveness of my argument among
those better read than myself.)

[...]

>>>What constraint does this place on models?

>>It places enormous constraints on models. It says that a model of
>>consciousness is unavailable, despite the existence of powerful
>>models of information processing.

>Right ... and then you say:

>>>What predictions does this make? 
>>Well, none. It's not a theory, it's a forthright admission that no
>>theory is available.

>But do you recall claiming the following:

>>I think psychology must proceed from consciousness as an axiom. Like many
>>other axiomatic systems, it will produce interesting results, while
>>leaving open the question of "truth".

>But when you are pressed, you admit that no results (let alone interesting
>ones) obtain. In fact, what you are doing is axiomatically denying
>the possibility of results.

This is an apparent contradiction, but I think I can get out of it by
stating my position more precisely. I think the useful results of psychology
are applications: principally therapeutic and educational ones.
Even in applications, the recognition that success can only be defined
subjectively and experientially should be prominent in such research.
On the other hand, I believe that no complete theory of the psyche,
comparable to those available in other sciences, is available.

Some may claim that progress toward a complete, systematic science of
consciousness has been made through cooperation with neurophysiology.
While I agree that such results are interesting and useful, I do not
believe that they can lead to a complete, systematic science that can
be integrated with the other fields of science. I believe this because
the central phenomenon of psychology is not definable in a way that
puts it within the domain of objective science.

I do not believe a complete and noncontroversial model of how consciousness
"emerges" from or "connects" to physical reality is forthcoming or plausible.
This is the only prediction my theory makes, and it is a negative one.
It can be tested only by counterexample. I eagerly await such a
counterexample. An objective test of the presence or absence of sentience
would do nicely.

>>Imho a useful psychology must accept its limitations
>>and accept that something utterly mysterious is happening. This still
>>leaves plenty of room for reason, though not as much as one might prefer.
>>In exchange, though, the door is opened for compassion.

>I am more and more amazed at the conclusions you seem to think I am
>forced into. As a monist, I am no more locked out of compassion than
>I am forced to feign anesthesia, as you earlier seemed to be claiming.

I do not claim that you lack compassion or sentience. I claim that efforts
at a complete theory of psychology, in attempting to make an objective
discipline out of subjective facts, are forced into a position of
sterility and futility. I claim that no such theory is available or
remotely plausible with the current knowledge and methods considered
to be scientific.

>I dislike being pigeonholed, as much as I dislike being represented by
>strawmen.

Quite unintentional, I assure you. My regrets.

>You on the other hand, propose to _define_ ourselves as having consciousness,
>and rights, and _define_ large classes of other beings as being unconscious
>and having no rights.  This is compassion?

Not really. It's prudence, though. We need a genuine test for consciousness
before assigning rights to constructs. The costs of an erroneous denial
of rights to genuine entities may be severe, but the costs of an erroneous
assignment of rights to nonentities may be infinite: life, which for all
we know may be unique to this planet, may end up being supplanted by non-life.

I for one have the intuition that such constructs will not be conscious.
Most workers in the field have the opposite intuition.  (Furthermore, I
do not believe my ocean model to be wet, either.) Unfortunately, some
means needs to be found to weigh our respective intuitions. I propose 
that the bulk of opinion among AI workers is not unbiased - it is
somewhat like polling Perot volunteers on whether Perot should re-enter
the presidential race. I further propose that some consideration be
given to the costs of a wrong decision in this matter.
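
To make that last point concrete, here is a rough sketch of the expected-cost
arithmetic (in Python; the probabilities and the finite cost figure are
entirely invented for illustration - the argument itself asserts only
"severe" versus "infinite"). If wrongly granting rights carries an unbounded
cost, it dominates the comparison at any nonzero chance that the construct
is a nonentity:

import math

# Toy decision matrix: expected cost of each policy as a function of
# P(the construct is conscious). Invented numbers, for illustration only.
COST_DENY_CONSCIOUS = 1e6        # wrongly deny rights: severe but finite
COST_GRANT_NONENTITY = math.inf  # wrongly grant rights: unbounded

def expected_cost(policy, p_conscious):
    """Expected cost of 'deny' or 'grant' given P(consciousness)."""
    if policy == "deny":
        return p_conscious * COST_DENY_CONSCIOUS
    return (1.0 - p_conscious) * COST_GRANT_NONENTITY

for p in (0.1, 0.5, 0.9):
    print(p, expected_cost("deny", p), expected_cost("grant", p))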

mt
