From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!ogicse!milton!forbis Tue Jan 28 12:18:20 EST 1992
Article 3189 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!ogicse!milton!forbis
From: forbis@milton.u.washington.edu (Gary Forbis)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan28.024635.21644@milton.u.washington.edu>
Date: 28 Jan 92 02:46:35 GMT
Article-I.D.: milton.1992Jan28.024635.21644
References: <11920@optima.cs.arizona.edu>
Organization: University of Washington, Seattle
Lines: 104

In article <11920@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>In article  <1992Jan25.230015.9475@mp.cs.niu.edu> Neil Rickert writes:
>]In article <11906@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>]>
>]>I don't have any problem believing that machine intelligence is
>]>possible, I just don't think you can say that some behavior is a sign
>]>of intelligence when you can completely explain the behavior without
>]>referring to intelligence.  That sort of belief is completely
>]>unmotivated.  (Or motivated by sloppy thinking.)
>]
>] I take it then that once somebody comes out with a full explanation of
>]human behavior, people will stop being intelligent!
>]
>
>It is amazing how many AI'ers come up with this particular bit of
>rhetorical quackery.  Or is their misunderstanding of the issues
>really that profound?  As I have written at least twenty times in the
>last couple of months: the belief that humans are conscious is not
>based on behavior but on introspection.  Unless you have achieved a
>remarkable level of philosophical sophistication, you are not able to
>doubt that you yourself are conscious, aware, and _thinking_ in a way
>that is different from the inanimate.

A couple of points.

I think introspection leads me to believe that I am conscious, but I don't
see how this can be extended to "humans are conscious."  I remember the quip
that answers "I know I exist, but how do I know you exist?" with "Who wants
to know?"  My own solution was: "I do, though I'll never know for certain;
but I don't like thinking I'm talking to myself, so I'll assume you exist."
By using the category "human" I have already drawn some parallels between
these assumed other entities and myself.  If I do not use behavior to
attribute consciousness to these other entities, why do I say some are dead
or unconscious?

Now as to my consciousness...

It is not clear to me that consciousness, awareness, and thinking arise by
way of some uniquely human process.  I'm not sure there is anything acting
upon the physical body I label "me" that does not also act upon the
inanimate.  I think the differences are not in kind but in complexity.  Just
as eddies appear in the flow of a river, I have appeared in the flow of time.
It is not clear to me that my consciousness has any effect on that flow,
though it is clear that the flow has an effect on me.  Can I choose who I am
or who I am to be?

The acts of the body cannot defy entropy.  There is no will which refuses to
obey physical laws.  References to "me" refer to something else.  The
observation of thinking is not the thinking observed, so when "I think,
therefore I am" is observed, the observer and the producer are not one and
the same, though there is a feedback mechanism between the two.  It matters
not: all systemic boundaries are arbitrary, and there is a closeness between
these two entities that does not exist between them and anything else the
observer observes.

>It takes little faith to believe that other humans are like you in
>this regard, regardless of any ability to explain their actions
>otherwise.  For even if there was a purely physical way to explain
>their behavior, the same mechanisms would work in you, and you would
>still be able to sense your own consciousness.  However, there is no
>logical reason to suppose that just because you have set up a physical
>device to mimic the behavior of a human, that that device must also
>have this form of consciousness.

Are you really referring to the actions of other humans here?  Isn't this
a behavioral observation?  I think it is this behavior, rather than the
identification of humanness, which leads one to assume consciousness.

>] No!  It is people like you who insist that because you don't comprehend
>]the workings of the brain, therefore the brain understands,
>
>The sentence above is proof that either you are completely
>misunderstanding my view or that you are not carrying on this
>discussion in an intellectually honest manner.
>
>] It is perfectly valid for you to say to the pro-AI folk "I don't believe
>]you - put up or shut up".  It is invalid to say that AI is proven invalid
>]based on some huge incomprehensible "explanation" full of vague words which
>]you refuse to define.
>
>I have not refused to define any words.  In fact I have many times
>given, if not definitions, then descriptions of what I mean by words,
>and tried to get people either (1) to deny my descriptions or (2) to
>argue their points such that they are still valid using my
>descriptions.  So far only one person has had the courage to try the
>first, and no one has even come close to the second.
>
>The AI position --at least as it is argued on this group-- seems to
>involve saying that behavior is adequate evidence of consciousness,
>even though they are unwilling to accept that consciousness is defined
>by behavior.  And no one has explained what other sort of relationship
>they might have that lets behavior be evidence of consciousness.  I
>maintain that the only relationship they have is that consciousness
>causes the behavior.  But clearly this relationship is not enough to
>say that behavior is evidence of consciousness.

I think it is clear that even you apply behavior as a criterion for
consciousness.  It is equally clear that you have, for no stated reason,
denied the same criterion as sufficient for non-humans.

>					David Gudeman
>gudeman@cs.arizona.edu
>noao!arizona!gudeman

--gary forbis@u.washington.edu


