From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Tue Jan 28 12:18:24 EST 1992
Article 3194 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <386@tdatirv.UUCP>
Date: 27 Jan 92 22:23:13 GMT
References: <11920@optima.cs.arizona.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 88

In article <11920@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
|In article  <1992Jan25.230015.9475@mp.cs.niu.edu> Neil Rickert writes:
|] I take it then that once somebody comes out with a full explanation of
|]human behavior, people will stop being intelligent!
|
|It is amazing how many AI'ers come up with this particular bit of
|rhetorical quackery.  Or is their misunderstanding of the issues
|really that profound?  ...
|
|It takes little faith to believe that other humans are like you in
|this regard, regardless of any ability to explain their actions
|otherwise.  For even if there was a purely physical way to explain
|their behavior, the same mechanisms would work in you, and you would
|still be able to sense your own consciousness. 

I would include in a full explanation of human behavior an explanation of
the mechanism by which the sensation of consciousness arises.  After all,
the idea here is a complete 'internal' explanation, not a 'behaviorist' one.
[That is, it covers mechanisms, not just stimulus-response patterns].

So, if a machine duplicated your own mechanism for consciousness (as determined
by biologists), would you accept it as conscious?  [I said duplicated, not
simulated].

|logical reason to suppose that just because you have set up a physical
|device to mimic the behavior of a human, that that device must also
|have this form of consciousness.

But what if it does more than mimic, what if it uses the same mechanisms?


The assumption of anti-AIers seems to be that a machine cannot duplicate
human mechanisms.  Well, I will have to see that proven, by an explication
of some human mechanism that is beyond machine capability.

|] No!  It is people like you who insist that because you don't comprehend
|]the workings of the brain, therefore the brain understands,
|
|The sentence above is proof that either you are completely
|misunderstanding my view or that you are not carrying on this
|discussion in an intellectually honest manner.

It seems to be an appropriate response to what you just said.
You certainly *seemed* to be implying that comprehension of mechanisms
removes the hypothesis of understanding.

|I have not refused to define any words.  In fact I have many times
|given, if not definitions, then descriptions of what I mean by words,
|and tried to get people either (1) to deny my descriptions or (2) to
|argue their points such that they are still valid using my
|descriptions.  So far only one person has had the courage to try the
|first, and no one has even come close to the second.

The problem is that descriptions are not adequate for scientific or
logical analysis.  Operational (that is testable/observable) definitions
are required before progress can be made.

|The AI position --at least as it is argued on this group-- seems to
|involve saying that behavior is adequate evidence of consciousness,
|even though they are unwilling to accept that consciousness is defined
|by behavior.  And no one has explained what other sort of relationship
|they might have that lets behavior be evidence of consciousness.

What lets behavior be evidence is the lack of any real (as opposed to
hypothetical) counter-examples.

As long as there is only one reasonable way of generating a given behavior,
that behavior is circumstantial evidence for that mechanism.
[That is, by applying the criterion of 'preponderance of the evidence'].

| I
|maintain that the only relationship they have is that consciousness
|causes the behavior.  But clearly this relationship is not enough to
|say that behavior is evidence of consciousness.

Logically true, but practically meaningless.  I have yet to see a *believable*
alternative mechanism.  So, for now at least, behavior is a good initial test.



Actually, in practice, I would like to know enough about the mechanism
to be sure the system is not giving 'canned' answers.  But beyond that I
am not sure I care what the mechanism is.  [The program 'printf("I am
conscious\n");' gives a canned answer; a program which examines its internal
state and generates an answer is *not* using a canned answer].
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)



