From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!gatech!cc.gatech.edu!terminus!centaur Tue Nov 26 12:31:47 EST 1991
Article 1526 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca rec.arts.books:10540 sci.philosophy.tech:1073 comp.ai.philosophy:1526
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!gatech!cc.gatech.edu!terminus!centaur
From: centaur@terminus.gatech.edu (Anthony G. Francis)
Newsgroups: rec.arts.books,sci.philosophy.tech,comp.ai.philosophy
Subject: Re: Daniel Dennett
Summary: AI cheerleading and some automata theory
Message-ID: <1991Nov23.214707.1663@cc.gatech.edu>
Date: 23 Nov 91 21:47:07 GMT
References: <1991Nov21.005355.5696@husc3.harvard.edu> <centaur.690849720@cc.gatech.edu> <1991Nov23.022628.5799@husc3.harvard.edu>
Sender: news@cc.gatech.edu
Followup-To: comp.ai.philosophy
Organization: College of Computing
Lines: 139

In article <1991Nov23.022628.5799@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>Since you chose to elide and ignore my semantical arguments, I feel
>justified in doing the same to your AI cheerleading, noting only that I am...

I don't cheerlead. I'm on the team. But your point is taken. 

>familiar enough with most of the programs you mention, to the extent of
>knowing their ... failures...I want the same reliability as manifested by
>humans.  All contenders will be tested by a bona fide expert in visual
>pattern recognition and measured against a randomly selected 5-year-old.
>Put up, or shut up.

That's not the same criterion you used earlier - you asked "Where are the
great AI tools?", not for a duplicate of human performance. Very well; I
believe you'll find my response to that at the end of the same article
(namely, that the requisite computing power does not exist yet).

>AGF:
>>[argument about FSA < PDA < TM deleted]
>MZ:
>A brain has no infinite tape; nor have you.  Sorry, but you are limited to
>the finite state automata.  Also note that an infinite table is still a ...

The universe is, as far as we can tell, finite, so your statement
(and that of another poster) is trivially true: any FSA we could build must
have a finite number of states, any PDA a finite stack, and any Turing
machine a finite-sized tape. OK, that's a given. But even finite-sized FSAs
have limitations that finite-sized PDAs and TMs don't.

Given a space constraint, I can design a fairly small PDA to accept the
language (a^i)(b^i), that is, i a's followed by i b's. A finite PDA with a
stack of depth s can accept any string of this form up to length 2s (one
stack cell per 'a'). An FSA, by contrast, must count the a's in its states:
checking a^i b^i takes roughly 2i states. So if we build an FSA with one
state per cell of the PDA's stack, plus the PDA's own control states (oh,
three or four tops), it has about s+4 states and can only accept strings of
this language up to length about s+4 - little more than half the length the
PDA handles, and hence it accepts little more than half the language. As the
complexity of the language rises, the limitations of the FSA become greater.
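
To make the counting argument concrete, here's a sketch (my own code and
helper names, not anything from this thread) of a stack-based (PDA-style)
recognizer next to a bounded-state (FSA-style) recognizer for a^i b^i; the
FSA-style one tracks the count in its "state number" and must reject once
the count needs more states than it has:

```python
# Sketch: PDA-style vs. FSA-style recognition of a^i b^i under a
# fixed space budget. (Illustrative only; names are my own.)

def pda_accepts(s, stack_limit):
    """PDA-style: push one marker per 'a', pop one per 'b'.
    Accepts a^i b^i for every i <= stack_limit."""
    stack = []
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:                  # an 'a' after a 'b': wrong shape
                return False
            if len(stack) >= stack_limit:
                return False            # out of stack space
            stack.append('A')
        elif ch == 'b':
            seen_b = True
            if not stack:
                return False            # more b's than a's
            stack.pop()
        else:
            return False
    return not stack

def fsa_accepts(s, n_states):
    """FSA-style: the only memory is which state you are in.
    Counting i a's and then i b's takes about 2i distinct states,
    so with n_states available it handles only i <= n_states/2."""
    count = 0
    seen_b = False
    for ch in s:
        if ch == 'a':
            if seen_b:
                return False
            count += 1
            if 2 * count > n_states:    # not enough states to count this high
                return False
        elif ch == 'b':
            seen_b = True
            count -= 1
            if count < 0:
                return False
        else:
            return False
    return count == 0
```

With a stack of depth 4 the PDA-style recognizer takes a^4 b^4 (length 8),
while the FSA-style recognizer with a comparable budget of 4 states tops out
at a^2 b^2 (length 4) - the "half the length" gap described above.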

Given a space limitation, it is always possible to design a PDA or TM that
accepts a larger set of languages than an FSA built in the same space. If
you're going to argue from real-space limitations, that's fine ... but when
you say that I'm limited to an FSA, you're doing a little handwaving, because
the FSA that accepts the languages I can might be unconstructible given those
same space limitations. In fact, if you're going to claim (as you do below)
that you have a theory of semantics that cannot be implemented on an FSA,
then show me a finite subset of that language (returning to real-world
constraints, as you have asked me to) and I'll capture it in an FSA.
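
The "finite subset" offer rests on a standard fact: any finite set of
strings is a regular language. Here's a sketch (my own construction, not
from the thread) that builds a literal FSA - a trie of states - accepting
exactly a given finite list of strings:

```python
# Sketch: any finite language can be captured by an FSA.
# Build a trie-shaped automaton whose states are trie nodes.
# (Illustrative only; names are my own.)

def build_fsa(strings):
    """Return (transitions, accepting) for an FSA that accepts
    exactly the given finite set of strings."""
    transitions = {0: {}}       # state -> {symbol: next state}
    accepting = set()
    next_state = 1
    for s in strings:
        state = 0
        for ch in s:
            if ch not in transitions[state]:
                transitions[state][ch] = next_state
                transitions[next_state] = {}
                next_state += 1
            state = transitions[state][ch]
        accepting.add(state)
    return transitions, accepting

def run_fsa(fsa, s):
    """Run the FSA on string s; reject on any missing transition."""
    transitions, accepting = fsa
    state = 0
    for ch in s:
        if ch not in transitions[state]:
            return False
        state = transitions[state][ch]
    return state in accepting
```

The number of states grows with the total length of the listed strings,
which is exactly the real-space point: finite fragments of any formally
specified language fit in an FSA, even when the full language doesn't.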

>AGF:
>>In fact, a Turing machine can accept any language that can be specified
>>in any kind of formal system... 
>
>A Turing machine could only capture the standard model of the integers in a
>trivial sense of being able to enumerate all natural numbers; surely you
>must realize that it is incapable of enumerating all propositions of
>arithmetic that are true in that model; indeed, it's incapable to transcend
>whatever formalism its program is based on...

Which is what I said in my next paragraph.

>AGF:
>>[Argument about Godel deleted]
>>This, however, does not exempt the formal system T from having true statements
>>that it cannot prove, nor does it exempt the total system of S and T from
>>having true but unprovable statements. This is, essentially, the case of
>                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>the mathematician operating with formal systems. The mathematician can 
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
>>determine the limitations of number theory, but this says nothing whatsoever
>>about the limitations of the mathematician or of the mathematician-theory
>>system. [overreaching hypothesis about Fermat's theorem deleted for space]
>
>You are confusing mathematical practice with formal manipulation of axioms.

No, I am not. I am merely noting that if mathematicians are reducible to
highly complex formal systems, Godel's theorem in no way contradicts their
ability to construct and analyze other formal systems. According to Godel,
there may be statements within such a formal system which are true but
unprovable, but those statements may have no currently recognizable relation
to the observable performance of the system.

>AGF:
>>Current scientific (rationalist) theory posits no physical operation which 
>>cannot be computed by a Turing machine. For a good argument why, I refer you 
>>to Penrose's _The Emperor's New Mind_... 
>Funny, my copy of Penrose, on pages 404ff., makes the claim that the brain
>operates non-algorithmically; never mind, it must be a misprint.

That's his _claim_. His explanation of how the brain actually functions
(according to our current knowledge), however, backs up my claim about current
theory and _its_ algorithmic claims. That's _why_ he posits his
quantum-mechanical theory - which, while it goes out on a limb, is one of
the boldest and gutsiest arguments for his position I have ever heard.

>AGF:
>>Arguments about expressing "denotational semantics" in a FSA miss the point;
>>if it can be formally specified _at all_, it can be specified in a Turing
>>machine (and to turn this around, if it can be expressed in a Turing machine
>>then we know we can do it with the neural machinery of the brain, since
>>the brain is complex enough to simulate a Turing machine). 
>
>I can define a great many things that a Turing machine, much less a FSA,
>simply can't compute; one example is the set of all real numbers not
>computable by a Turing machine.  Likewise, as I have argued earlier, I can
>give you a pretty good theory of meaning that can't be simulated by a FSA.

Perhaps I missed this theory of meaning - could you produce it, please?
Maybe it was in one of the earlier messages in this thread. You can mail it
to me if you don't want to post it again ...

>AGF:
>>[Argument against the primacy of philosophy deleted] 
>
>Modern semantics originates with Frege, without whose old, flawed, faulty,
>incorrect intuitions about our cognitive systems you would still be using a
>slide rule; perhaps, in this respect only, the world would be a much better
>place.  What you are trying to do above is condemn without understanding;
>this renders any attempt to hold a reasoned discussion with you quite
>pointless.  End of this discourse.

Interesting. I'll have to look up this Frege. I thought Boole, Babbage and
a host of engineers and other scientists were responsible for the developments
that make this message possible, and I wasn't aware that there was some
overarching semantic theory that made their efforts possible. However, if you
want to stop discussing, that's fine. I believe we are arguing from different
sets of core beliefs about what is relevant to the issue, as happens all too
often. Thanks for replying to my message - you made me think, which is why
I engage in these debates anyway.
--
Anthony G. Francis, Jr.  - Georgia Tech {Atl.,GA 30332}
Internet Mail Address: 	 - centaur@cc.gatech.edu
UUCP Address:		 - ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!gt4864b
-------------------------------Quote of the post------------------------------- 
"Just take the money and run, and if they give you a hassle, blow them away."
	- collected in a verbal protocol for the Bankrobber AI Project
-------------------------------------------------------------------------------


