From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!gatech!mcnc!ecsgate!lrc.edu!lehman_ds Tue Jan 28 12:18:29 EST 1992
Article 3200 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!gatech!mcnc!ecsgate!lrc.edu!lehman_ds
From: lehman_ds@lrc.edu
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan27.182343.160@lrc.edu>
Date: 27 Jan 92 23:23:43 GMT
References: <11774@optima.cs.arizona.edu>
Organization: Lenoir-Rhyne College, Hickory, NC
Lines: 65

In article <11774@optima.cs.arizona.edu>, gudeman@cs.arizona.edu (David Gudeman) writes:
> In article  <42064@dime.cs.umass.edu> Joseph O'Rourke writes:
> ]In article <11722@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
> ]>It is impossible in principle for one agent to distinguish between
> ]>"knowledge" and "understanding" in another agent, because the
> ]>difference is only sensible to the agent who has (or doesn't have)
> ]>understanding.
> ]
> ]It is also impossible in principle to exclude the possibility that
> ]we were all created a minute ago, memories intact, as Russell pointed
> ]out.  I was responding to the question of why conversation could be 
> ]strong empirical evidence for understanding.  If you define understanding 
> ]to require unobservable "internal self-awareness," then of course
> ]it is impossible to establish beyond the shadow of doubt.
> 
> I am not saying that you can't establish the understanding of a
> machine "beyond the shadow of a doubt", I'm saying you have no reason
> at all to believe that a machine understands just because you can't
> stump it with hard questions.  A human, yes.  A machine, no.  The
> difference is that humans use understanding to answer questions and a
> machine uses syntactic manipulation.  Unless you are prepared to argue
> that understanding is identical to syntactic manipulations, the test
> that proves a human understands tells you nothing about the computer
> (except that the syntactic manipulations are damn good).
> 
> And _of course_ I define understanding to be a matter of internal
> self-awareness.  If understanding were only a matter of behavior, then
> the statement "An entity that behaves as though it understands can be
> assumed to understand" would be trivially true and uninteresting; and
> I would have to be an idiot to argue against it.  Please give me a
> little more credit than that.
> 
> Furthermore, if you want to argue the behaviorist definition of
> understanding then please do so openly (and prepare to be assaulted
> from all sides).  I'm getting really tired of seeing statements that
> implicitly assume the behaviorist definitions of understanding
> without an explicit reference to it.  In my view, this is either a
> sign that the writer is a naive behaviorist who doesn't understand the
> significance of the view (and the serious problems associated with
> it), or it is a sleazy rhetorical tactic that the writer uses to get
> the logical advantages of behaviorism for arguing strong AI without
> committing to the problems that go with it.
> --
> 					David Gudeman
> gudeman@cs.arizona.edu
> noao!arizona!gudeman
    David: Behaviorism was never disproved, but rather pushed aside.
That happened because people are unwilling to admit that they
are only acting on stimuli.
   A closer look reveals another point.  We all start with the same simple
rules, and those rules are built upon by how we experience the world.
If this is behaviorism, then yes, I state that anything that appears to be
intelligent is just that, BECAUSE I have no other means by which to judge it.
   I have also seen the argument a lot that we manipulate thoughts while
machines manipulate symbols... then tell me this: what is .. 1 ..?
   That is a SYMBOL.  Nothing more.  It represents the idea of numerical
singularity.  When we look at the statement 1+1=2, we manipulate the
seen symbols into basic symbols we can deal with, then abstract again to
come up with the representation of the answer.  Notice how I did not say idea.
Even our ideas are symbols of logical forms.  To say that machines cannot
be intelligent because they manipulate symbols makes human thought
unintelligible.
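   To make the point concrete, here is a toy program (my own sketch, not
anything anyone in this thread wrote) that "computes" 1+1=2 by nothing but
symbol rewriting.  The digit names and the tally notation are arbitrary
choices of mine; the point is that no "idea" of number is consulted
anywhere -- the digits are rewritten into tally marks, the tallies are
concatenated, and the result is rewritten back into a digit:

```python
# Reduce an expression like "1+1" to "2" by pure symbol shuffling.
# The rewrite tables below are arbitrary; only string operations occur.

DIGIT_TO_TALLY = {"0": "", "1": "|", "2": "||", "3": "|||", "4": "||||"}
TALLY_TO_DIGIT = {v: k for k, v in DIGIT_TO_TALLY.items()}

def add_symbols(expr):
    """Rewrite 'a+b' into its sum using string concatenation alone."""
    left, right = expr.split("+")
    # Concatenation of tally strings -- no arithmetic is performed.
    tallies = DIGIT_TO_TALLY[left] + DIGIT_TO_TALLY[right]
    return TALLY_TO_DIGIT[tallies]

print(add_symbols("1+1"))  # prints: 2
```

Whether what we do when we "see" that 1+1=2 differs in kind from this,
or only in complexity, is exactly the question at issue.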
   Drew Lehman
   Lehman_ds@lrc.edu
"Intelligence is simple rules applied to a complex environment"
