From newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!tdat!swf Wed Oct 14 14:58:01 EDT 1992
Article 7153 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!sun-barr!olivea!uunet!tdat!swf
From: swf@teradata.com (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1222@tdat.teradata.COM>
Date: 7 Oct 92 23:37:08 GMT
References: <1992Oct2.202342.16039@spss.com> <1992Oct5.022907.6131@meteor.wisc.edu> <BvpMGo.KLy@gpu.utcs.utoronto.ca> <1992Oct6.204155.13168@meteor.wisc.edu>
Sender: news@tdat.teradata.COM
Reply-To: swf@tdat.teradata.com (Stanley Friesen)
Organization: NCR Teradata Database Business Unit
Lines: 124

In article <1992Oct6.204155.13168@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
|In article <BvpMGo.KLy@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
|
|>Ask the same questions about yourself. (Hint: there are plenty of situations
|>where you can lose consciousness, with your brain still working)
|
|I lose consciousness when I cease to have an experience. Since I cannot
|imagine an identification between this phenomenon and anything purely
|algorithmic, perhaps due to some flaw in my own intellectual structure,
|it is unfair to have me guessing the answers to these questions. I think
|I can create a counterargument to any of them, though.

Well, I can see one very basic component to loss of consciousness.
The brain, in the unconscious state, has ceased recording most events
in long term memory.  Thus events that occur during this time are not
incorporated into the internal world-models the brain maintains.
[In fact these world-models are probably not even updated during
unconsciousness].

|My point is that, since presumably most procedures are not conscious, since
|it is so widely believed that some procedures are conscious, it is
|problematic what sort of distinction arises between them.

True, until we have some characterization of what constitutes consciousness
in the human brain, we cannot precisely formulate what is necessary for
a process to be conscious.

However, the fact that the brain, performing physico-chemical operations,
can achieve it indicates that at least *one* process achieves consciousness.

The only way in which it can be shown that the execution of an algorithm
of some particular sort cannot achieve consciousness is to show that some
critical portion of the process used by the human brain is not representable
as some algorithm.

Penrose *tries* to do this with his quantum consciousness theory, but he
bases his model on an inadequate knowledge of neurology and an invalid
epistemology.

But at least he tried to get past mere semantic hand-waving.

|>Again, where did you get this from? I do not see how is it relevant whether
|>a person following the rules understands them or not.
|
|This is what I gather from the 'systems' reply to the Chinese room question.
|If I implement a Chinese-understanding algorithm that I don't understand,
|it is proposed that a consciousness exists somehow in the 'system' that is
|distinct from my own. On the other hand, if I implement an algorithm that
|I fully understand, say playing tic-tac-toe, no such additional entity is
|proposed.

This comes close.  It is inherent in the simple fact that the brain as a
whole does understand/is conscious, but no individual neuron or glial cell
is.

Thus, attributing consciousness to the brain is *already* the systems reply,
since it ignores the interior components.

The only difference is that, in this case, the individual neurons are not
*capable* of consciousness, while a human-as-a-CPU is.

|>I would be very interested to hear what plausible theory of subjective 
|>consciousness can you envision (no restriction on a system from which it
|>might arise).
|
|I have no such theory and do not need one. It's not me making extravagant
|claims about algorithms.

But you *are* making 'extravagant' claims about consciousness.

What is subjectivity other than some form of recursive self-modelling?

Why should we believe that there is more to it than that?
Why should our own intuitions about ourselves be given so much weight?
[Why, for instance, cannot the whole thing be subsumed in a self-model
in which we model ourselves as being aware of, or feeling, something?]
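To make "recursive self-modelling" slightly more concrete, here is a purely
illustrative toy sketch.  The class and attribute names are invented for the
example; it is not a claim about how brains actually do it, only a picture of
a world-model that contains a model of the modeller:

```python
# Toy sketch of recursive self-modelling: an agent whose world-model
# includes a model of the agent itself.  Purely illustrative.

class Agent:
    def __init__(self):
        # What the agent believes about the world...
        self.world_model = {}
        # ...including an entry modelling the agent itself.
        self.world_model["self"] = {"percepts": []}

    def perceive(self, event):
        # Update the model of the world...
        self.world_model[event] = True
        # ...and also the model of the self *perceiving* it.
        self.world_model["self"]["percepts"].append(event)

    def report(self):
        # A "subjective" report: what the self-model says was experienced.
        return ["I am aware of " + e
                for e in self.world_model["self"]["percepts"]]

a = Agent()
a.perceive("red light")
print(a.report())   # ['I am aware of red light']
```

On this picture, the "awareness" in the report is nothing over and above an
entry in the self-model, which is the point at issue.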

| In
|particular, though, the defense that a Chinese understanding consciousness
|somehow comes into existence contingent on my following some rules seems
|to be flawed. Those who insist on defending its existence should come
|up with plausible arguments as to what makes them believe that the sequence
|of rule-implementations could be conscious, when clearly no individual
|rule-implementation can be.

Consider the analogy with neurons, for which the same arguments can be made.
No single neuron can be conscious; only a properly designed system composed
of many interacting neurons can be so.

If I used your argument, I would have to conclude that consciousness is
impossible, and must not exist, since a system of multiple parts cannot
have features not intrinsic to its parts.
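That last premise is demonstrably false even for very simple systems.  A
standard small example (weights chosen by hand; the function names are mine):
a single linear threshold unit cannot compute exclusive-or, but a two-layer
network of three such units can, so the network has a capability no part has:

```python
# A system exhibiting a capability none of its parts possesses.
# One threshold "neuron" cannot compute XOR, but three wired together can.

def unit(inputs, weights, threshold):
    # One neuron: fires (1) iff the weighted input sum reaches threshold.
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def xor_net(x, y):
    h1 = unit([x, y], [1, 1], 1)        # OR-like hidden unit
    h2 = unit([x, y], [1, 1], 2)        # AND-like hidden unit
    return unit([h1, h2], [1, -1], 1)   # fires for "OR but not AND" = XOR

for x in (0, 1):
    for y in (0, 1):
        print(x, y, "->", xor_net(x, y))
```

Nothing mysterious is added by the wiring; the new feature lives in the
organization of the parts, not in any part taken alone.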

|The usual response is to refer to my neurons as just such a system. I claim
|that the proposal that I am my neurons is not demonstrated, but the contrary
|idea is rudely dismissed as "unscientific". In fact, though, we have no idea 
|how (or if) experience can arise from matter, and in this most important of all
|philosophical questions, people with a scientific pose respond with much
|emphatic handwaving, but no science responds with a coherent theory.

You seem to be a bit behind in neurobiology.  A great many components of
brain function are understood now, more than most people realize.

There is absolutely *no* evidence that there is any process going on that is
not mediated by the physical components of our brains.  (There is some
*slight* suggestion that neuroglia may be important, in addition to the
neurons, but that does not change the basic model much).

Furthermore, there is tremendous evidence, from deficit studies, simulated
neural network studies, PET scans, neural connectivity studies, and others
that most, or all, sorts of capabilities shown by humans can be, and are,
generated by the activities of the components of the brain (mostly neurons).

Thus, the idea that there is more there *does* become unscientific, in that
it lacks any supporting data, and makes no testable claims.

Thus, in the absence of any evidence to the *contrary* it is more appropriate
to tentatively assume that the physical components of the brain are sufficient
to explain its operation.

-- 
sarima@teradata.com			(formerly tdatirv!sarima)
  or
Stanley.Friesen@ElSegundoCA.ncr.com
