From newshub.ccs.yorku.ca!torn!utgpu!pindor Thu Oct  8 10:11:24 EDT 1992
Article 7132 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Brain and Mind (was: Logic and God)
Message-ID: <BvpMGo.KLy@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <BvI81J.92B@gpu.utcs.utoronto.ca> <1992Oct2.185539.2953@meteor.wisc.edu> <1992Oct2.202342.16039@spss.com> <1992Oct5.022907.6131@meteor.wisc.edu>
Date: Tue, 6 Oct 1992 17:12:23 GMT

In article <1992Oct5.022907.6131@meteor.wisc.edu> tobis@meteor.wisc.edu (Michael Tobis) writes:
..........
>
>Does your system lose its putative consciousness 1) when the wrong rules are 
>substituted 2) when the wrong rules are implemented 3) when the wrong 
>result is output or 4) not at all, regardless of how wrong the algorithm is?
>
Ask the same questions about yourself. (Hint: there are plenty of situations
where you can lose consciousness while your brain is still working.)

>Either all procedures are conscious or only some procedures are conscious.
 
Why? Who said so? There are plenty of processes in the brain and only some of
them contribute to consciousness.

>My question is whether you believe an arbitrary algorithm, to which no meaning
>can plausibly be ascribed, is conscious, or whether only certain ones are, and 
>how you can distinguish between the two types if not. What about an algorithm
>which responds appropriately to any Chinese input unless the output is ready on
>an even numbered clock tick, in which case it responds with gibberish? Is it
>half conscious?

Who says 'arbitrary algorithm'?
>
>Only the tenacious insistence that intelligent function is identical to
>experience allows one to insist that me following rules I don't understand 
>creates a conscious entity, while me following rules I do understand does not.
>
Again, where did you get this from? I do not see how it is relevant whether
the person following the rules understands them or not.
>
...........
>But I think it's no trick; it's not the slowness of the me+rules system
>that bothers me- it's its structure. I cannot envision a plausible theory
>of subjective consciousness that could allow it to arise from such a system.
>
I would be very interested to hear what plausible theory of subjective
consciousness you can envision (with no restriction on the system from which it
might arise).

.............
>I am convinced that no better test than the Turing Test is possible, because
>there is no plausible objective measure for the presence or absence of
>subjective phenomena. I have no doubt that many in AI would consider an
>implementation that passed the Turing Test as often as a human as
>a great success. Given that such a construct is a goal that some
>have in mind, reaching that goal would be inadequate evidence of the existence
>of the phenomenon of subjective consciousness implicitly being searched for.
>
In your earlier posting you claimed that some animals also seem to be
conscious. Have they passed the TT? What criteria are used for saying that they
show consciousness, and why couldn't a computer with a proper program satisfy
those criteria?
..............

Andrzej Pindor


-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca