From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!gatech!mcnc!ecsgate!lrc.edu!lehman_ds Tue Jan 21 09:27:25 EST 1992
Article 2917 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!gatech!mcnc!ecsgate!lrc.edu!lehman_ds
From: lehman_ds@lrc.edu
Newsgroups: comp.ai.philosophy
Subject: Re: Do birds cause flight?
Message-ID: <1992Jan19.201907.123@lrc.edu>
Date: 20 Jan 92 01:19:06 GMT
References: <CMM.0.90.2.695770899.jrk@division.cs.columbia.edu>
Organization: Lenoir-Rhyne College, Hickory, NC
Lines: 60

In article <CMM.0.90.2.695770899.jrk@division.cs.columbia.edu>, jrk@division.cs.columbia.edu (John R. Kender) writes:
> There is a component to this "*really* understands" debate that I
> haven't yet seen explored, and I would be grateful for those of you
> who are more trained in analytic philosophy to comment on it.
> 
> It appears to me that much of the problem with words like
> "understand", Minsky's "live", and simpler works like "fly", "see",
> "talk", or even "cook", is that they evolve over time as technology
> advances.  Much of the current debate might be attributable to the
> cultural lag in fitting old words with new meanings.  Indeed, this is
> how I interpret the intention of Turing's paper: it is an exercise in
> the redefinition of the word "think".  Similarly, Searle's arguments
> appear to be, at least in part, an attempt to reaffirm the
> contemporary meaning of "understand" in the face of technological
> challenge.
> 
 ..Much deleted due to size..
> 
> To some extent, all the rest is up to engineering.  It may or may not
> be possible to construct a machine that "understands" under given
> constraints, just as the Gossamer Condor is at one extreme, and the
> apparent impossibility of nuclear-powered aircraft is at the other.
> The history of aviation has long since won the linguistic war, and is
> now profitably and non-controversially concerned with traditional
> engineering matters of efficiency and scale.  Aviation, however, has
> the advantage of the underlying theories of physics.  The underlying
> theories of cognition, in my opinion, remain closer to the "flying
> principles" of the scholastics, making it difficult for the common man to
> straightforwardly use "understand" with the same culturally accepted
> agreement as it affords to the various senses of "fly".
> 
> John
  I find this one of the most intelligent arguments I've seen here..
It seems that everyone is arguing over terms and phrases that we all have
a different concept of.  For example, when I say "John said.." we all think
of a different John... or for the most part most of us do.  The same can
be said for the terms being flung around like blunt instruments..
  I see arguments for "understanding" and "intelligence" but no set definition
of such words.  When debating in philosophy, the first step is to define
the premises of the argument; until this is done, all arguments are moot.
  By saying that intelligence can only come from living creatures, or just
humans, you place yourself in an argument that of course cannot be
met by a machine simply because of the definition, in effect taking the easy
way out.  Much of what I see goes back to the age-old argument of
Rationalism vs. Empiricism.  The Rationalist says that there is something
we can know without learning, in effect saying that there is a mind
and not just the physical function of the brain.  The empiricist says that
there is no such thing.
  Now, for all this effort and time spent arguing, as I have seen, we have come
no closer to defining our goal than when we started.  I see many assumptions
being made, and no one is making the same ones as the previous person.
  Saying that a machine that responds like a human cannot be
intelligent because it does not "UNDERSTAND" is rather flimsy, because we
have no way of measuring understanding other than the responses given to
a proposed set of questions or the like.  We must apply the same standards
to the machines we create as we do to the children we create.
  By defining away anything that could possibly be artificial intelligence,
we sidestep the argument, and no conclusions can be drawn from it.
    Drew Lehman
    Lehman_ds@mike.lrc.edu