From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!elroy.jpl.nasa.gov!ncar!noao!arizona!gudeman Tue Jan 28 12:17:57 EST 1992
Article 3161 of comp.ai.philosophy:
>From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <11920@optima.cs.arizona.edu>
Date: 26 Jan 92 20:37:10 GMT
Sender: news@cs.arizona.edu
Lines: 62

In article  <1992Jan25.230015.9475@mp.cs.niu.edu> Neil Rickert writes:
]In article <11906@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
]>
]>I don't have any problem believing that machine intelligence is
]>possible, I just don't think you can say that some behavior is a sign
]>of intelligence when you can completely explain the behavior without
]>referring to intelligence.  That sort of belief is completely
]>unmotivated.  (Or motivated by sloppy thinking.)
]
] I take it then that once somebody comes out with a full explanation of
]human behavior, people will stop being intelligent!
]

It is amazing how many AI'ers come up with this particular bit of
rhetorical quackery.  Or is their misunderstanding of the issues
really that profound?  As I have written at least twenty times in the
last couple of months: the belief that humans are conscious is not
based on behavior but on introspection.  Unless you have achieved a
remarkable level of philosophical sophistication, you are not able to
doubt that you yourself are conscious, aware, and _thinking_ in a way
that is different from the inanimate.

It takes little faith to believe that other humans are like you in
this regard, regardless of any ability to explain their actions
otherwise.  For even if there were a purely physical way to explain
their behavior, the same mechanisms would work in you, and you would
still be able to sense your own consciousness.  However, there is no
logical reason to suppose that just because you have set up a physical
device to mimic the behavior of a human, that device must also have
this form of consciousness.

] No!  It is people like you who insist that because you don't comprehend
]the workings of the brain, therefore the brain understands,

The sentence above is proof that you are either completely
misunderstanding my view or not carrying on this discussion in an
intellectually honest manner.

] It is perfectly valid for you to say to the pro-AI folk "I don't believe
]you - put up or shut up".  It is invalid to say that AI is proven invalid
]based on some huge incomprehensible "explanation" full of vague words which
]you refuse to define.

I have not refused to define any words.  In fact I have many times
given, if not definitions, then descriptions of what I mean by my
words, and tried to get people either (1) to deny my descriptions or
(2) to argue their points so that they remain valid under my
descriptions.  So far only one person has had the courage to try the
first, and no one has even come close to the second.

The AI position --at least as it is argued on this group-- seems to
involve saying that behavior is adequate evidence of consciousness,
even though its proponents are unwilling to accept that consciousness
is defined by behavior.  And no one has explained what other sort of
relationship the two might have that lets behavior be evidence of
consciousness.  I maintain that the only relationship they have is
that consciousness causes the behavior.  But clearly this relationship
is not enough to say that behavior is evidence of consciousness, since
the same behavior might be produced by entirely different causes.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman


