From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny Tue Nov 26 12:31:45 EST 1991
Article 1522 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10513 sci.philosophy.tech:1068 comp.ai.philosophy:1522
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wupost!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett
Message-ID: <1991Nov23.022628.5799@husc3.harvard.edu>
Date: 23 Nov 91 07:26:24 GMT
References: <32905@uflorida.cis.ufl.EDU> <1991Nov21.005355.5696@husc3.harvard.edu> <centaur.690849720@cc.gatech.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 169
Nntp-Posting-Host: zariski.harvard.edu

Since you chose to elide and ignore my semantical arguments, I feel
justified in doing the same to your AI cheerleading, noting only that I am
familiar enough with most of the programs you mention to know their very
real failures.  As an example, I offer you visual
pattern recognition.  Show me a program that can do the thing nearly every
normal human child can: reliably recognize a human face.  No "experimental"
designs will be accepted: I want the same reliability as manifested by
humans.  All contenders will be tested by a bona fide expert in visual
pattern recognition and measured against a randomly selected 5-year-old.
Put up, or shut up.

In article <centaur.690849720@cc.gatech.edu> 
centaur@terminus.gatech.edu (Anthony G. Francis) writes:

>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

MZ:
>>charlatanic claim that the AI software is in principle different from "some
>>simple table-lookup architecture" (of course, all Turing machine or FSA
>>programs *are* instances of some simple table-lookup architecture), the
>>question to ask is whether such a system can be finite, while still
>>implementing the necessary semantical knowledge.  As can be readily seen
>>from the above, this is most manifestly not the case.
>
>>For an analogous example, consider the integers.  It's well-known that no
>>complete recursive axiomatization of elementary arithmetic can be given;
>>furthermore, the axioms of the first-order PA are not even categorical,
>>i.e. they fail to characterize their models up to isomorphism.  In spite of
>>all that, certain refractory Stanford scientists notwithstanding, human
>>mathematicians seem to have no difficulty in operating with semantical
>>notions like that of the standard model of the natural numbers, which
>>inherently can't be captured by a FSA.

AGF:
>I have some problems with this. First, finite state automata are not
>as powerful as (are not capable of accepting as large a class of languages
>as) push-down automata, which are not as powerful as Turing machines. 
>A Turing machine is not a FSA - it is much more powerful. Admittedly, PDA's
>and Turing machines use FSA's as part of their mechanisms, but it's a bit
>silly to call them simple table-lookup mechanisms when all of them can
>accept infinite languages.

A brain has no infinite tape; nor have you.  Sorry, but you are limited
to finite state automata.  Also note that an infinite table is still a
table, and that Turing machines are certainly incapable of accepting
infinitary languages, as defined by Alfred Tarski and developed by Carol
Karp; if they could do so, then your next statement would have been true
in a non-trivial sense.
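The "simple table-lookup" description is literal in the case of finite
automata: an FSA is nothing but a finite transition table, and running it
is repeated lookup.  A minimal sketch in Python (the machine and alphabet
here are invented for illustration, not drawn from the discussion):

```python
# A finite state automaton is fully specified by a finite transition
# table, a start state, and a set of accepting states; "running" it is
# pure table lookup, one per input symbol.  This toy machine accepts
# binary strings containing an even number of 1s.
TABLE = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd",  "0"): "odd",
    ("odd",  "1"): "even",
}
START, ACCEPT = "even", {"even"}

def accepts(s):
    state = START
    for ch in s:
        state = TABLE[(state, ch)]  # nothing but lookup happens here
    return state in ACCEPT

print(accepts("1010"))  # True: two 1s
print(accepts("111"))   # False: three 1s
```

A Turing machine differs only in having an unbounded tape attached; its
control is exactly such a finite table.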

AGF:
>In fact, a Turing machine can accept any language that can be specified
>in any kind of formal system. A FSA could never "capture" the standard model
>of natural numbers, but a Turing machine could. 

A Turing machine could only capture the standard model of the integers in
the trivial sense of being able to enumerate all natural numbers; surely
you must realize that it is incapable of enumerating all propositions of
arithmetic that are true in that model; indeed, it is incapable of
transcending whatever formalism its program is based on...
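The contrast can be put in one line, as a standard corollary of Gödel's
first incompleteness theorem (a sketch for the record, not part of the
original exchange):

```latex
% Let Th(N) be the set of first-order sentences true in the standard
% model of arithmetic.  If Th(N) were recursively enumerable, it would
% itself constitute a complete, recursively axiomatizable theory of
% arithmetic, contradicting G\"odel's first incompleteness theorem.
\mathrm{Th}(\mathbb{N}) \;=\; \{\varphi : \mathbb{N} \models \varphi\}
\quad \text{is not recursively enumerable,}
% even though N itself is trivially enumerable by a Turing machine.
```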

AGF:
>An appropriate response at this point is that Godel's results inform us
>that there are inherent limitations in any such formal systems, for instance
>that for any such implementation there will be true statements which are
>unprovable. At this point, Penrose and others have held up the fact that
>mathematicians can see and recognize this limitation, and can operate 
>despite it, and therefore they must represent a system more powerful than
>that of a Turing machine.

... and one consequence of Gödel's theorem is that the informal notion of
mathematical proof is guaranteed to transcend every human formalization
attempt.

AGF:
>This is a basic misapplication of Godel's results. Godel's results do not
>imply that a formal system cannot be analyzed. It is possible to examine
>the results of a given formal system S from within a second formal system
>T, and to find statements in S that are true, but not provable within S.
>In effect, this is exactly what Godel's proof does - he uses one formal
>system to construct a statement in another that is true in that system,
>but unprovable. Admittedly, the unprovable statement would be prohibitively
>large to actually state textually, given the method Godel used to arrive
>at his result, but the result _could_ be constructed.
>
>This, however, does not exempt the formal system T from having true statements
>that it cannot prove, nor does it exempt the total system of S and T from
>having true but unprovable statements. This is, essentially, the case of
                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>the mathematician operating with formal systems. The mathematician can 
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>determine the limitations of number theory, but this says nothing whatsoever
>about the limitations of the mathematician or of the mathematician-theory
>system. (Interestingly enough, this suggests something about certain
>intractably unsolvable problems, like Fermat's.) We simply don't know enough
>about mathematicians to determine their axioms and formal operations, or
>even if they do have formal operations (which, in a very oblique way, is
>what this whole argument is about). 

You are confusing mathematical practice with formal manipulation of axioms.
If this formalist thesis were true, there would be no point in arguing: the
AI philosophy of mathematics essentially amounts to the same formalism that
most mathematicians have rejected since the early thirties.  On the
contrary, actual mathematical practice consists in much histrionic
handwaving and appeals to intuition; to paraphrase Dumarsais, you could
hear more metaphors in a single mathematical seminar than you would in an
entire English department.  Moreover, what do you think is the point of
arguing over whether the Continuum Hypothesis is true or false, nearly
thirty years after it was proven independent of the axioms of ZFC?

AGF:
>Current scientific (rationalist) theory posits no physical operation which 
>cannot be computed by a Turing machine. For a good argument why, I refer you 
>to Penrose's _The Emperor's New Mind_. As such, if the brain operates
>purely as a physical device, then it must be simulatable by a Turing machine.
>Therefore, it can be simulated on a (sufficiently large) computer. 

Funny, my copy of Penrose, on pages 404ff., makes the claim that the brain
operates non-algorithmically; never mind, it must be a misprint.  In any
case, the real question is not whether a brain lends itself to a FSA
simulation, but whether a mind does.

AGF:
>Arguments about expressing "denotational semantics" in a FSA miss the point;
>if it can be formally specified _at all_, it can be specified in a Turing
>machine (and to turn this around, if it can be expressed in a Turing machine
>then we know we can do it with the neural machinery of the brain, since
>the brain is complex enough to simulate a Turing machine). 

I can define a great many things that a Turing machine, much less a FSA,
simply can't compute; one example is the set of all real numbers not
computable by a Turing machine.  Likewise, as I have argued earlier, I can
give you a pretty good theory of meaning that can't be simulated by a FSA.
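The existence of such reals follows from a counting argument alone; here
is the standard sketch (again, not original to this exchange):

```latex
% Every Turing machine has a finite description over a finite alphabet,
% so there are only countably many machines, hence at most countably
% many computable reals:
|\{\, x \in \mathbb{R} : x \text{ is computable} \,\}| \;\le\; \aleph_0
% But Cantor's diagonal argument gives |R| = 2^{aleph_0} > aleph_0,
% so uncountably many reals are computable by no Turing machine at all.
```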

AGF:
>I still believe that arguments about symbols and denoting and meaning, at
>least as they are being formulated in the context of this discussion,
>just don't mean anything.  These concepts are based on old, flawed, faulty, 
>incorrect _intuitions_ about our cognitive systems, built by philosophers who 
>were probably as accurate recreating their own innards as they would be giving
>an eyewitness report of a crime scene (i.e., just like the rest of us, not 
>accurate at all). There is no reason to believe that our popular or
>philosophical concepts of the mind will hold up any better than the 
>"naive physics" that each human being learns through observing the world
>with his or her own limited sensors. Aristotle's physics was wrong; people's
>intuitions about the physical world are wrong, as has been proven by
>psychological experiment; there is no reason to believe that philosophy or
>popular cognitive intuitions have any special validity.

Modern semantics originates with Frege, without whose old, flawed, faulty,
incorrect intuitions about our cognitive systems you would still be using a
slide rule; perhaps, in this respect only, the world would be a much better
place.  What you are trying to do above is condemn without understanding;
this renders any attempt to hold a reasoned discussion with you quite
pointless.  End of this discourse.

'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139                                     :
: (617) 661-8151                                                     :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`
`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'


