From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott Thu Feb 20 15:21:52 EST 1992
Article 3842 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!cs.utexas.edu!convex!constellation!a.cs.okstate.edu!onstott
From: onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR)
Newsgroups: comp.ai.philosophy
Subject: Re: Humongous table-lookup misapprehensions
Message-ID: <1992Feb18.204314.19580@a.cs.okstate.edu>
Date: 18 Feb 92 20:43:14 GMT
References: <1992Feb13.201806.26828@psych.toronto.edu> <6189@skye.ed.ac.uk> <1992Feb16.184329.8680@psych.toronto.edu>
Organization: Oklahoma State University, Computer Science, Stillwater
Lines: 127

In article <1992Feb16.184329.8680@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>In article <6189@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>>In article <1992Feb13.201806.26828@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>>>In article <1992Feb13.073457.16647@a.cs.okstate.edu> onstott@a.cs.okstate.edu (ONSTOTT CHARLES OR) writes:
>>>>  Of course, one could attempt to redefine intelligence without thinking
>>>>of it as particularly human.  If this be the case, and if we accept
>>>>that such a table is intelligent, and if the research project is founded
>>>>on understanding the mind by means of the table; the look-up table is
>>>>uninteresting because
>>>>all it has done is confirm a particular theory of "intelligence" and
>>>>not a theory of "how-the-mind-works."  
>>>
>>>But many cognitive scientists see the purpose of their discipline not to
>>>explain *human* intelligence, but intelligence *per se*.
>>
>>Don't you think there'd be at least something a bit odd about it
>>if Cog Sci ended up explaining intelligence per se but still not
>>human intelligence?
>
>If we generate a general theory of intelligence which covers *all*
>examples of intelligence then we *will* have explained human
>intelligence.
>
  This is an appropriate procedure in scientific investigation, and it
lends itself to a top-down theory: let's prove intelligence and then prove
the mind.  I think, on the other hand, that a bottom-up theory is much more
advantageous, because if we find the workings of the mind, then we
would know a lot more about what we are calling "intelligence."  I am
not sure that "intelligence" can be defined per se, because most definitions
of intelligence exclude large portions of human behavior.  However, if
we say intelligence means *this*, and then say that whatever contrary
behavior we get from a human being is not *this*, and therefore whatever
that behavior is, it is not intelligence, then you have completely lost me.
You have lost me because it seems that intelligence can then be anything
we deem it, so long as it corresponds to some element of human behavior.
I am not sure, however, that a lot of the definitions of intelligence
haven't already excluded human behavior.
  One such definition, sorry Jeff, is intelligence based on speed or
efficiency or look-up.  These may very well make sense in reference to a
computer, such as an "intelligent terminal" or the type of intelligence
that Allen Newell writes about.  But these things really only work
insofar as a computer is concerned.  Just because it took Yeats years
to write some of his poems, or because it may take me 20 minutes to
think through a proof, do we deem ourselves more or less intelligent?
If you want to call intelligence speed, then it only works in reference
to a computer.  Intelligence seems to have more to do with relation and
connection: the ability to perceive something (by whatever method) and
connect it up to produce something creative and new.  Genius is thought
to have existed in the writings of Kepler, Augustine, and Plato, and we
deem these people intelligent.  First, who knows how fast they were at
determining the things they did.  Second, these writers are generally
accepted to be outdated and wrong (forgive me if there be any Platonists
reading this).  But just because they are wrong, we don't call them
unintelligent, or not geniuses.  I think we are looking in the wrong
place entirely.

  But making such a statement invalidates what I was talking about above:
I am assuming some sort of top-down method.  I still think we must know
more about the mind, perhaps in a biological sense, before we can start
creating things from silicon and saying "Hey, it's smart."
  
>>_AI_, on the other hand, can be content with creating some kind
>>of intelligence, even if it has little to do with human intelligence.
 This is correct, so long as AI admits that this is what it is doing.
Cog Sci, on the other hand, supposes it can go beyond this, and that
is its mistake.

>>
>>Moreover, I think it's reasonable to consider "intelligence" as
>>a question of performance (ie, behavior).  I think that's the way
>>the word is going, at least ("intelligent terminals", etc), and 
>>if any of the words that get debated here are going to be defined
>>in terms of behavior, I think that's the one to pick.

Only so far as we are not trying to interpret performance as a human
characteristic.  I think, Jeff, that this is what you intend, though.

>>This leaves plenty of room for Artificial Intelligence to go about
>>the task of creating useful programs and machines of various sorts
>>without having to win any of these big debates about intentionality
>>and understanding.
>
>But is it precisely "these big debates" which make the implications 
>of AI programs different from those that do weather prediction.  If
>you are only interested in "creating useful programms" then AI is
>simply a branch of engineering.  This is fine, but then it must repudiate
>its claim to produce minds.  For me, it is the "big debates" which make
>AI interesting. 
  Ah, but it is usually an AI researcher who is also a Cognitive
Psychologist who claims that AI can produce minds.
>
>>On the other hand, the recurring suggestion that, because "flying"
>>has a more or less behavioral definition, all sorts of words like
>>"understanding" and "intentionality" should have a behavioral
>>definition too, looks like an attempt to take over all the words,
>>and so to win the argument by removing all the vocabulary that
>>we use to talk about the interesting cases.  Perhaps we will no
>>longer think these words refer to anything interesting after
>>Cog Sci completes its task; but I think we should wait until
>>that happens.
>
>Well, I for one would *not* advocate a purely behavioural definition
>of "understanding" and "intentionality".  But then again, these things
>differ from "flying" precisely because of their subjective component.
>
  The point being made by Jeff, I think, was not that you are strictly
behavioristic, but rather that you assume a connection between something
called "understanding" in a finite sense and human behavior, so that
in the end, once you have confused us enough with this, we cannot win
the argument, simply because you would insist that "understanding" covers
everything from human behavior to computer behavior.  In short, we would
argue "But that isn't human behavior," and you would say "Yes it is,
because computer understanding is the same as human behavior," just
because your definition says it is so.

BCnya,
  Charles O. Onstott, III

------------------------------------------------------------------------
Charles O. Onstott, III                  P.O. Box 2386
Undergraduate in Philosophy              Stillwater, Ok  74076
Oklahoma State University                onstott@a.cs.okstate.edu


"The most abstract system of philosophy is, in its method and purpose, 
nothing more than an extremely ingenious combination of natural sounds."
                                              -- Carl G. Jung
-----------------------------------------------------------------------