From newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwvax!meteor!tobis Thu Oct  8 10:11:28 EDT 1992
Article 7138 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwvax!meteor!tobis
From: tobis@meteor.wisc.edu (Michael Tobis)
Newsgroups: comp.ai.philosophy
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <1992Oct6.204155.13168@meteor.wisc.edu>
Date: 6 Oct 92 20:41:55 GMT
References: <1992Oct2.202342.16039@spss.com> <1992Oct5.022907.6131@meteor.wisc.edu> <BvpMGo.KLy@gpu.utcs.utoronto.ca>
Organization: University of Wisconsin, Meteorology and Space Science
Lines: 90

In article <BvpMGo.KLy@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:

>>Does your system lose its putative consciousness 1) when the wrong rules are 
>>substituted 2) when the wrong rules are implemented 3) when the wrong 
>>result is output or 4) not at all, regardless of how wrong the algorithm is?

>Ask the same questions about yourself. (Hint: there are plenty of situations
>where you can lose consciousness, with your brain still working)

I lose consciousness when I cease to have an experience. Since I cannot
imagine an identification between this phenomenon and anything purely
algorithmic, perhaps due to some flaw in my own intellectual structure,
it is unfair to have me guessing the answers to these questions. I think
I can create a counterargument to any of them, though.

>>Either all procedures are conscious or only some procedures are conscious.
 
>Why? Who said so? There are plenty of processes in the brain and only some of
>them contribute to consciousness.

I think you misread this. I was only making a noncontroversial logical
disjunction, on the hypothesis that conscious procedures exist. Perhaps you
read 'none' where I wrote 'some'.

My point is that, since presumably most procedures are not conscious, and
since it is so widely believed that some procedures are conscious, the
question of what distinguishes the two kinds becomes problematic.

>>Only the tenacious insistence that intelligent function is identical to
>>experience allows one to insist that me following rules I don't understand 
>>creates a conscious entity, while me following rules I do understand does not.

>Again, where did you get this from? I do not see how it is relevant whether
>a person following the rules understands them or not.

This is what I gather from the 'systems' reply to the Chinese room question.
If I implement a Chinese-understanding algorithm that I don't understand,
it is proposed that a consciousness exists somehow in the 'system' that is
distinct from my own. On the other hand, if I implement an algorithm that
I fully understand, say playing tic-tac-toe, no such additional entity is
proposed.
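To make the contrast concrete, here is a minimal sketch (my own illustration, not anything from the thread) of the kind of fully understood rule set I have in mind: a tic-tac-toe player reduced to three fixed priority rules that the implementer can follow and comprehend completely.

```python
# A hypothetical rule set the implementer fully understands:
# choose a tic-tac-toe move by fixed priority rules --
# win if you can, block if you must, else take the first free square.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def move(board, me, them):
    """board: list of 9 cells, each me, them, or ' '. Return index to play."""
    for player in (me, them):               # rule 1: win; rule 2: block
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and ' ' in cells:
                return (a, b, c)[cells.index(' ')]
    return board.index(' ')                 # rule 3: first empty square
```

Following these rules mechanically yields competent play, yet no one proposes that a second consciousness arises in the "me + these rules" system; the systems reply reserves that move for rule sets too large for the implementer to comprehend.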

I should note that there is an array of opinions against me here. (I'm not
paranoid; what else could I expect?) I may be answering someone else's
model and not yours. 

>>But I think it's no trick; it's not the slowness of the me+rules system
>>that bothers me- it's its structure. I cannot envision a plausible theory
>>of subjective consciousness that could allow it to arise from such a system.

>I would be very interested to hear what plausible theory of subjective 
>consciousness you can envision (no restriction on a system from which it
>might arise).

I have no such theory and do not need one. It's not me making extravagant
claims about algorithms. As I have said a few times, I do not believe
such a theory is in prospect and I fail to see how such a theory could
be verified within the established methods of objective science. In
particular, though, the defense that a Chinese-understanding consciousness
somehow comes into existence contingent on my following some rules seems
to be flawed. Those who insist on defending its existence should come
up with plausible arguments as to what makes them believe that the sequence
of rule-implementations could be conscious, when clearly no individual
rule-implementation can be.

The usual response is to refer to my neurons as just such a system. I claim
that the proposal that I am my neurons has not been demonstrated, yet the
contrary idea is rudely dismissed as "unscientific". In fact, though, we have
no idea how (or if) experience can arise from matter, and on this most
important of all philosophical questions, people striking a scientific pose
respond with much emphatic handwaving rather than a coherent theory.

>In your earlier posting you've claimed that some animals also seem to be 
>conscious. Have they passed TT? 

If you take my belief that Fluffy the Cat is conscious as a realization of
the Turing Test, perhaps so.

>What criteria are used for stating that they
>show consciousness and why couldn't a computer with a proper program satisfy
>these criteria? 

1) I have, by analogy, access to the experience of the cat, who is
after all a distant cousin, while I have no such access to the proposed
experience of your constructs. 2) The cat was not designed to pass the
Turing Test, so my intuitions are more trustworthy than in a case where
passing the test is (at least implicitly) a design goal.

mt
