Xref: newshub.ccs.yorku.ca rec.arts.books:10484 sci.philosophy.tech:1063 comp.ai.philosophy:1511
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!gatech!cc.gatech.edu!terminus!centaur
From: centaur@terminus.gatech.edu (Anthony G. Francis)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett (was Re: Commenting on the pos
Message-ID: <centaur.690849720@cc.gatech.edu>
Date: 22 Nov 91 22:42:00 GMT
References: <15018@castle.ed.ac.uk> <1991Nov19.183901.5640@husc3.harvard.edu> <32905@uflorida.cis.ufl.EDU> <1991Nov21.005355.5696@husc3.harvard.edu>
Sender: news@cc.gatech.edu
Organization: Georgia Tech College of Computing
Lines: 146

zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>charlatanic claim that the AI software is in principle different from "some
>simple table-lookup architecture" (of course, all Turing machine or FSA
>programs *are* instances of some simple table-lookup architecture), the
>question to ask is whether such a system can be finite, while still
>implementing the necessary semantical knowledge.  As can be readily seen
>from the above, this is most manifestly not the case.

>For an analogous example, consider the integers.  It's well-known that no
>complete recursive axiomatization of elementary arithmetic can be given;
>furthermore, the axioms of the first-order PA are not even categorical,
>i.e. they fail to characterize their models up to isomorphism.  In spite of
>all that, certain refractory Stanford scientists notwithstanding, human
>mathematicians seem to have no difficulty in operating with semantical
>notions like that of the standard model of the natural numbers, which
>inherently can't be captured by a FSA.

I have some problems with this. First, finite state automata are strictly
less powerful than push-down automata - they accept a strictly smaller class
of languages - which in turn are strictly less powerful than Turing machines.
A Turing machine is not a FSA - it is much more powerful. Admittedly, PDA's
and Turing machines use FSA's as their finite controls, but it's misleading
to call them simple table-lookup mechanisms: the control is a table, but the
unbounded stack or tape is what gives them their extra power.
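
To make the hierarchy concrete: the language { a^n b^n : n >= 0 } is the
classic separator. The pumping lemma shows no FSA accepts it, while a
single stack - i.e., a PDA - handles it easily. A minimal sketch in Python
(the function name and test strings are my own illustration):

    def accepts_anbn(s):
        """Recognize { a^n b^n : n >= 0 } with one stack (a PDA can; no FSA can)."""
        stack = []
        i = 0
        while i < len(s) and s[i] == 'a':   # push one marker per leading 'a'
            stack.append('A')
            i += 1
        while i < len(s) and s[i] == 'b':   # pop one marker per 'b'
            if not stack:
                return False                # more b's than a's
            stack.pop()
            i += 1
        return i == len(s) and not stack    # all input consumed, counts match

    assert accepts_anbn("") and accepts_anbn("aaabbb")
    assert not accepts_anbn("aabbb") and not accepts_anbn("abab")

No fixed number of states can do the counting that the stack does here;
that is exactly the gap between the levels of the hierarchy.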

In fact, a Turing machine can accept any language generated by a formal
system - an unrestricted grammar, or a recursively axiomatized theory -
which is to say, any recursively enumerable language. A FSA could never
"capture" the standard model of the natural numbers, but a Turing machine
can at least enumerate the theorems of any recursive axiomatization of it.

An appropriate response at this point is that Godel's results tell us that
any such formal system (if consistent, and strong enough to express
arithmetic) has inherent limitations - in particular, that there will be
true statements it cannot prove. Penrose and others have held up the fact
that mathematicians can see and recognize this limitation, and can operate
despite it, to argue that the mathematician must therefore be more powerful
than any Turing machine.

This is a basic misapplication of Godel's results. Godel's results do not
imply that a formal system cannot be analyzed. It is possible to examine a
given formal system S from within a second formal system T, and to find
statements of S that are true but not provable within S. In effect, this is
exactly what Godel's proof does - he reasons in one formal system about
another, constructing a sentence of the object system that is true (in the
standard model) but unprovable there. Admittedly, that sentence would be
prohibitively large to write out in full, given the arithmetization Godel
used, but it _can_ be constructed.
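
Schematically, in LaTeX notation (with the standard hypotheses: S is
consistent and recursively axiomatized, and T is strong enough to formalize
S's proof predicate - this is the usual statement, not a derivation):

    S \vdash G \leftrightarrow \neg\mathrm{Prov}_S(\ulcorner G \urcorner)
        % diagonal lemma: G "says" that G is unprovable in S
    T \vdash \mathrm{Con}(S) \rightarrow G
        % a sufficiently strong T proves: if S is consistent, G holds

The second line is just the formal version of "T can see that S's Godel
sentence is true, given S's consistency."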

This, however, does not exempt the formal system T from having true statements
that it cannot prove, nor does it exempt the total system of S and T from
having true but unprovable statements. This is, essentially, the case of
the mathematician operating with formal systems. The mathematician can 
determine the limitations of number theory, but this says nothing whatsoever
about the limitations of the mathematician or of the mathematician-theory
system. (Interestingly enough, this suggests something about certain
long-unsolved problems, like Fermat's Last Theorem.) We simply don't know enough
about mathematicians to determine their axioms and formal operations, or
even if they do have formal operations (which, in a very oblique way, is
what this whole argument is about). 

Current scientific (rationalist) theory posits no physical operation that
cannot be computed by a Turing machine. For a good argument why, I refer you
to Penrose's _The Emperor's New Mind_. As such, if the brain operates purely
as a physical device, then it must be simulable by a Turing machine, and
therefore it can be simulated on a (sufficiently large) computer.

Arguments about expressing "denotational semantics" in a FSA miss the point;
if it can be formally specified _at all_, it can be implemented on a Turing
machine (and, to turn this around, if it can be expressed as a Turing machine
then the neural machinery of the brain can do it, since the brain is complex
enough to simulate a Turing machine).
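
And simulating a Turing machine takes surprisingly little: the finite
control really is just a lookup table, and all the extra power comes from
the unbounded, rewritable tape. A minimal simulator sketch in Python (the
names and the example transition table are mine):

    # Minimal TM simulator: the finite control is literally a dictionary;
    # the unbounded tape is what lifts it above a table-lookup FSA.
    def run_tm(delta, tape, state="start", halt="halt", steps=10_000):
        cells = dict(enumerate(tape))           # sparse tape; blank is ' '
        head = 0
        for _ in range(steps):
            if state == halt:
                break
            sym = cells.get(head, ' ')
            state, write, move = delta[(state, sym)]   # the table lookup
            cells[head] = write
            head += 1 if move == 'R' else -1
        return ''.join(cells.get(i, ' ') for i in range(min(cells), max(cells) + 1))

    # Example table: flip every bit, halting on the first blank.
    delta = {("start", "0"): ("start", "1", 'R'),
             ("start", "1"): ("start", "0", 'R'),
             ("start", " "): ("halt",  " ", 'R')}
    print(run_tm(delta, "0110"))                # -> "1001 "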

I still believe that arguments about symbols and denoting and meaning, at
least as they are being formulated in the context of this discussion, just
don't mean anything. These concepts are built on old, flawed _intuitions_
about our cognitive systems, formed by philosophers who were probably about
as accurate at reconstructing their own innards as they would be at giving
an eyewitness report of a crime (i.e., just like the rest of us, not very
accurate at all). There is no reason to believe that our popular or
philosophical concepts of the mind will hold up any better than the "naive
physics" that each human being learns by observing the world with his or
her own limited sensors. Aristotle's physics was wrong; people's intuitions
about the physical world are wrong, as psychological experiments have shown;
there is no reason to believe that philosophical or popular cognitive
intuitions have any special validity.


And on another note:
>Once again, having spent the past fourteen years developing software, I'll
>call you on this.  Where are those wonderful AI tools like the natural
>language understanding 
 SHRDLU, MARGIE, SAM, PAM, ASK-SAM, FRUMP, AQUA, HEARSAY, HEARSAY II, and BORIS 

>                       and translation programs, 
 In current use, translating Japanese into English.

>                                                 expert systems that would
>do medical and mechanical diagnostics as well as, or better than humans,
 MYCIN, which performed as well as or better than human experts in early
 evaluations; its shell was adapted to other domains, and Digital Equipment
 Corporation's system configurer (XCON) has been in production use, etc.

>visual pattern recognition systems, 
 Still in development; Minsky had programs doing this, and we're doing it
 here, to a limited extent, at Georgia Tech ...

>                                    robots that can walk on uneven terrain,
 This is a toughie, because it's computationally more complex than story
 understanding or chess playing, and it demands hardware engineering
 (sensors, actuators, balance) on top of the software.

>and countless other things that were promised so long ago?  

 How about learning programs that learn subtraction in the way that human
  children do - even making the same mistakes? Ask John Anderson.

 How about meal-planning programs that can figure out what to do with leftover
  rice? Or can adapt a meat meal for vegetarians, and learn from its mistakes
  when broccoli gets soggy? Ask Janet Kolodner.

 How about route-planning programs that can learn maps and remember their past
  attempts? Ask Ashok Goel.

 How about story-writing programs? Electrical design programs? Architectural
  aids? Personalized newspapers? They're on the way. 

Why are these wonderful programs not in the workforce yet? Because the
problem of AI is BIG. We have only the kernel of the results we need, and
we have years, decades further to go. Some AI applications are out in the
field now - expert systems, for instance. Case-based systems and
personalized news services will follow soon. But the rest? Give it time.
You have a million times as much data in you as the average AI program.
Computer systems can't yet support the kinds of cognition that end-users
want as applications, and they won't until at least 2030-2050, if you
believe Hans Moravec's projections. (Other estimates put those projections
off by a factor of 1,000 to 1,000,000, which at current rates of growth in
computing power would push the date back to 2070-2090; the arithmetic is
sketched below.) Once we have the power to support the applications, then
stand back - the consumers will demand AI.
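
Back-of-the-envelope version of that slippage, assuming compute doubles
roughly every two years (the doubling period is my assumption, not
Moravec's figure):

    # A 10^6 shortfall in compute costs log2(10^6) doublings to close:
    import math
    shortfall = 1_000_000
    doubling_years = 2
    print(round(math.log2(shortfall) * doubling_years))   # -> 40 years
    # hence 2030-2050 slipping to roughly 2070-2090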


--
Anthony G. Francis, Jr.  - Georgia Tech {Atl.,GA 30332}
Internet Mail Address: 	 - centaur@cc.gatech.edu
UUCP Address:		 - ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!gt4864b
-------------------------------Quote of the post------------------------------- 
"Just take the money and run, and if they give you a hassle, blow them away."
	- collected in a verbal protocol for the Bankrobber AI Project
-------------------------------------------------------------------------------


