From newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwvax!meteor!tobis Thu Oct  8 10:10:32 EDT 1992
Article 7053 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.skeptic:20653 comp.ai.philosophy:7053
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Newsgroups: sci.skeptic,comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Sep28.164828.2122@meteor.wisc.edu>
Date: 28 Sep 92 16:48:28 GMT
References: <1992Sep17.181358.1828@Princeton.EDU> <1992Sep20.180454.4161@daffy.cs.wisc.edu> <1992Sep24.000850.6734@hilbert.cyprs.rain.com>
Organization: University of Wisconsin, Meteorology and Space Science
Lines: 148

In article <1992Sep24.000850.6734@hilbert.cyprs.rain.com> max@hilbert.cyprs.rain.com (Max Webb) writes:
>In article <1992Sep20.180454.4161@daffy.cs.wisc.edu> tobis@xrap3.ssec.wisc.edu (Michael Tobis) writes:
>>To apply reason to consciousness we must take consciousness
>>to be axiomatic. This is a long way from taking it to be explained.
>
>You have said this several times. Several times, both in email and
>in public I have asked you to explain what 'taking consciousness
>to be axiomatic' means. 

I mean this more in a context of psychology than philosophy. I conclude
that objective verification of consciousness is impossible at current
levels of understanding, and suspect it may be impossible in theory.
Nevertheless, we have subjective verification of consciousness. Since
"psyche" rather than physics is the appropriate subject matter of
psychology, rational psychology should, imho, assume the existence of
a phenomenon which is objectively unverifiable. The more commonly taken
approaches, as far as I can see, are 1) to deny that consciousness is
meaningful and to redefine psychology as a study of behavior rather than
mind; or 2) to deny that consciousness is problematic, and pretend it 
reduces somehow to an information processing problem. It seems to me
that both of these approaches trivialize and demean experience (in the
existential sense) and can only lead to muddled and confused theories
of the psyche. Imho a useful psychology must accept its limitations
and accept that something utterly mysterious is happening. This still
leaves plenty of room for reason, though not as much as one might prefer.
In exchange, though, the door is opened for compassion.

>What constraint does this place on models?

It places enormous constraints on models. It says that a model of
consciousness is unavailable, despite the existence of powerful
models of information processing. I suspect that it may always 
be unavailable, but it is certainly not "just around the corner", as
so many people seem smugly convinced.

>What predictions does this make? 

Well, none. It's not a theory, it's a forthright admission that no
theory is available.

>I know that I am conscious, and am
>willing to take that as an axiom. I am not thereby transformed into
>a substance dualist like yourself. 

Well, I don't know what you mean by "substance". I just don't see
that handwaving about information processing somehow magically
implies an entity with an experience. The question is which side
bears the burden of proof. 

Monists take the point of view that since
science has made so much progress in explaining all other previously
mysterious phenomena, it is appropriate to assume that the phenomenon
of consciousness will eventually (and even soon) be included in the
same or a similar slightly extended model. In fact, I only object to
this assumption when it is left implicit and treated as a demonstrated
fact rather than as an assumption or an intuition.

Dualists take the point of view that since objective science has made
absolutely no progress in explaining how events of the experiential
type somehow "emerge" from physical events, it is appropriate to suspect
that there is a deep-seated flaw in attempting such explanations. 

Monists take the angle that physical science has shed light on almost 
everything, and only a few small corners remain obscure. Dualists observe
that the single most important phenomenon in the universe, THE FACT THAT
THERE'S ANYONE HERE TO ASK THESE QUESTIONS, remains completely inaccessible
to the structure of physical science, and handwaving about emergent 
properties aside, seems likely to remain inaccessible for the foreseeable
future.

>Care to answer the question this time?

Sorry, I've been a bit busy. Will that do?

>>|> In article <1992Sep13.194856.21976@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
>>I am NOT arguing that rational thought should not be applied to phenomena
>>of consciousness because there is no consciometer. I am arguing that
>>the idea that consciousness can be explained in some objective way is
>>at best profoundly premature (and I continue to suspect that it is
>>undecideable). 
>
>With what would you replace the assumption that consciousness can be
>explained? The assumption that it _cannot_ be explained? Exactly what sort
>of research program does that lead to? [Hint: none; you merely advise us
>to give up and go home before we begin].

Well, if you want to devote yourself to something that may turn out to
be futile, go ahead. I think that cognitive information processing
models of intelligence may be quite useful, as long as people are not
so quick to assume that intelligence in the sense of getting the right
answer to a question is the same as consciousness in the sense of
"it is like something to be" an entity that processes the information,
i.e., that the entity has an experience. I don't object to AI. I only
object to the assumption that AI is artificial consciousness. That is
far from proven, and as I have argued, probably cannot be proven.

>If you still think Searles argument is convincing, please say so, and
>we can air it out in public. I don't think it will stand up too well.

I hereby say (again) that Searle's Chinese Room argument is compelling,
and that arguments against it make a deep and irreparable category
mistake. Searle points out that, granting that natural language algorithms
exist, a non-Chinese-speaking human executing the algorithm will 
certainly give functionally correct responses in Chinese to inputs
in Chinese. The question is whether any "understanding" of Chinese
thereby occurs. It is clear that the algorithm is a static mathematical
structure, and that the human does not understand the conversation he is
facilitating, although functionally appropriate exchanges in Chinese are
occurring. The point of this description is to clarify the difference between
functional Chinese and experiential Chinese: the latter is obviously not
occurring.
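To make the point concrete, here is a toy sketch of my own devising. The
rulebook and the replies in it are invented for illustration; a real
natural-language algorithm would be vastly larger, though no different in
kind for the purposes of the argument:

```python
# A toy "Chinese Room": a rulebook maps input symbols to output symbols.
# The machine (or the man following the book) only matches patterns;
# nothing here understands Chinese.

RULEBOOK = {
    "你好": "你好！",          # "hello" -> "hello!"
    "你会说中文吗": "会。",    # "do you speak Chinese?" -> "I do."
}

def room(symbols):
    # Look the input up and emit the prescribed output,
    # falling back to a stock reply for unknown inputs.
    return RULEBOOK.get(symbols, "请再说一遍。")  # "please say that again."

print(room("你好"))  # -> 你好！
```

The lookup is performed flawlessly, and the exchanges are functionally
appropriate, but nothing in the process has any experience of Chinese.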

Your example of the Hottentot multiplying and my more mundane example of
the freshman differentiating a polynomial are similar. Neither understands
what they are doing. In a sense the Hottentot is multiplying and the
freshman is differentiating; but in a sense they are not. They get the
right answer, but they don't have any experience of what that means or why
anyone would be interested.

The process of Chinese-speaking, multiplying, or differentiating may
be successfully implemented, without any conscious entity grasping
what is happening. This all exemplifies that just because intelligent
information processing is happening does not imply that conscious
understanding is happening.
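Indeed, the freshman's rote rule fits in a few lines. A minimal sketch,
assuming (for illustration only) that a polynomial a0 + a1*x + a2*x**2 + ...
is represented as its list of coefficients [a0, a1, a2, ...]:

```python
def differentiate(coeffs):
    """Apply the freshman's rule blindly: multiply each coefficient
    by its exponent, then drop the constant term."""
    return [n * a for n, a in enumerate(coeffs)][1:] or [0]

# d/dx (1 + 3x + 5x^2) = 3 + 10x
print(differentiate([1, 3, 5]))  # -> [3, 10]
```

The right answer comes out every time, yet the code has no experience of
what the answer means or why anyone would want it.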

Attacks on this argument say that it is objectively empty. This is
true and entirely misses the point. The argument is SUBJECTIVELY profound and 
compelling, provided one does not assume that there "must be" a monist
answer to the mind/body problem. Lacking that assumption, I conclude 
that the problem of artificial consciousness is completely disjoint
from the problem of artificial intelligence. Taking the monist
assumption, you conclude that there is no difference between intelligent
behavior and consciousness. (I think this is otherwise flawed, too:
I can think of examples of consciousness without intelligence as well as
intelligence without consciousness.)

I repeat: if you make your assumption explicit and tentative, fine. Go
with it, see where it leads, prove me wrong if you can. However, to
take it as proven, to dismiss me as superstitious because I believe
otherwise (often with a sneer of contempt - not so much you, Max, but
others certainly) and to proceed to assign rights to our models as if they
were the thing modelled is ill-considered and dangerous in the extreme.

No one is going to arrest me for polluting my ocean model! Why should they
be concerned if I inflict torture on some model of cognitive processes? I'm
not actually bothering anyone in either case.

mt


