From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!elroy.jpl.nasa.gov!ames!ncar!noao!arizona!gudeman Fri Jan 31 10:27:28 EST 1992
Article 3312 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!elroy.jpl.nasa.gov!ames!ncar!noao!arizona!gudeman
From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <12099@optima.cs.arizona.edu>
Date: 30 Jan 92 22:27:17 GMT
Sender: news@cs.arizona.edu
Lines: 84

In article <386@tdatirv.UUCP> Stanley Friesen writes:
]In article <11920@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:

]So, if a machine duplicated your own mechanism for consciousness (as
]determined by biologists), would you accept it as conscious?  [I said
]duplicated, not simulated].

Not without some cogent argument that could convince me that
consciousness is a consequence of the mechanism.

]The assumption of anti-AIers seems to be that a machine cannot duplicate
]human mechanisms.

I can't speak for others, but I make no such assumption.  In fact I'm
making no assumptions at all.  I'm merely pointing out _your_
unfounded assumption: namely, that duplicating the mechanism must lead
to consciousness.

]You certainly *seemed* to be implying that comprehension of mechanisms
]removes the hypothesis of understanding.

No, all I said is that once you know the behavior can be explained
entirely in terms of programming, you no longer have any motivation
for assuming that the behavior is caused by consciousness.  And this
does not apply to humans, because we have a completely different
reason, beyond mere behavior, to think other humans are conscious.

]The problem is that descriptions are not adequate for scientific or
]logical analysis.

It is not true that descriptions are inadequate for logical analysis,
so long as you restrict your analysis to qualities that follow from
the description.

]Operational (that is testable/observable) definitions
]are required before progress can be made.

So if you want to claim that you can test/observe consciousness, it is
up to you to give a definition that makes it testable/observable.  But
if you do, it is also up to you to argue that this testable/observable
thing is identical to the thing we were discussing before.

]As long as there is only one reasonable way of generating a given behavior,
]then that behavior is circumstantial evidence for that mechanism.
][That is by applying the criterion of 'preponderance of the evidence'].

AAAAAAAGH!  For the 192nd time: THE VERY HYPOTHESIS OF THE TURING TEST
IS THAT THE BEHAVIOR CAN BE GENERATED BY PURELY MECHANICAL MEANS.
That IS another "reasonable" way of generating the behavior.  If you
want to claim that the Turing test is evidence of consciousness it is
up to YOU to show that there is some relationship between this
mechanical behavior and consciousness.  YOU CAN'T JUST KEEP SAYING
THAT CONSCIOUSNESS IS THE ONLY KNOWN WAY OF GENERATING INTELLIGENT
BEHAVIOR AFTER YOU HAVE ASSUMED THAT INTELLIGENT BEHAVIOR CAN BE
GENERATED MECHANICALLY.

Ahem.  Sorry about the yelling, but I'm getting a little frustrated
at people repeating the same mistakes over and over.  If you don't
agree with my rebuttal above, then rebut it.  Please don't just keep
saying the same things after I've shown them to be wrong, without
giving some argument for why my argument is wrong.

]Actually, in practice, I would like to know enough about the mechanism
]to be sure the system is not giving 'canned' answers.  But beyond that I
]am not sure I care what the mechanism is.

That is the whole point.  Computers can't possibly be programmed to do
anything but give "canned" answers.  That's how computers work.  The
belief that the Turing test shows consciousness amounts to the belief
that computers are conscious _in spite of the fact that they are
giving "canned" answers_.

]  [the program 'printf("I am
]conscious\n");' is a canned answer, a program which examines its internal
]state and generates an answer is *not* using a canned answer].

Then the program 'if (i > 0) printf("I am conscious");' is not a
canned answer?  The problem is that if you assume that a computer can
consciously "examine" its internal state, then you are assuming the
result.
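To make this concrete, here is a minimal sketch of a program that
"examines its internal state" in exactly the sense above.  (The
variable names are made up for illustration; this is not anyone's
actual program.)

    #include <stdio.h>

    /* Toy "self-examining" program.  The variable `awake` stands in
       for the whole of its internal state; the name is hypothetical. */
    int main(void)
    {
        int awake = 1;       /* internal state, fixed by the programmer */

        if (awake)           /* "examine" the internal state */
            printf("I am conscious\n");
        else
            printf("I am not conscious\n");

        return 0;
    }

Both possible outputs were written down in advance by the programmer.
The "examination" merely selects among canned answers; it does not
escape them.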
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman
