From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!rutgers!cs.utexas.edu!swrinde!gatech!cc.gatech.edu!terminus!centaur Tue Nov 26 12:32:03 EST 1991
Article 1554 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!jupiter!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!rutgers!cs.utexas.edu!swrinde!gatech!cc.gatech.edu!terminus!centaur
From: centaur@terminus.gatech.edu (Anthony G. Francis)
Newsgroups: comp.ai.philosophy
Subject: Re: Daniel Dennett
Message-ID: <centaur.691029638@cc.gatech.edu>
Date: 25 Nov 91 00:40:38 GMT
References: <centaur.690849720@cc.gatech.edu> <1991Nov23.022628.5799@husc3.harvard.edu> <1991Nov23.214707.1663@cc.gatech.edu> <39701@dime.cs.umass.edu>
Sender: news@cc.gatech.edu
Organization: Georgia Tech College of Computing
Lines: 135

yodaiken@chelm.cs.umass.edu (victor yodaiken) writes:

>In article <1991Nov23.214707.1663@cc.gatech.edu> centaur@terminus.gatech.edu (Anthony G. Francis) writes:
>>>AGF:
>>>>[argument about FSA < PDA < TM deleted]
>>>MZ:
>>>A brain has no infinite tape; nor have you.  Sorry, but you are limited to
>>>the finite state automata.  Also note that an infinite table is still a ...
>>
>>(and the statement of another poster) is trivially true, since any FSA we
>>could build must have a finite number of states, any PDA must have a finite
>>stack size, and any Turing Machine must have a finite sized tape. Ok, that's
>>a given. But even finite-sized FSA's have limitations that finite-sized
>>PDA's and TM's don't.
>
>This argument seems to be quite widespread, and I'm curious to see if you
>can shed some light on it. PErhaps you could define "FSA" in such a way
>as to make the distinction between FSAs and "finite-sized TM's" clear.
>To my, no doubt naive, understanding, a Turing machine limited to a finite
>tape *is* a FSA, or, at the very least, is a representation of an FSA.

Ok, I'll give it my best shot - try not to laugh _too_ much. To my, no doubt 
equally naive, understanding, a Turing Machine limited to a finite tape _can 
be represented_ by a finite state automaton, _but_ that finite state
automaton cannot be realized under the same constraints as those imposed
on the Turing Machine. Here goes a try:

A finite state automaton can be defined as a set of states, a set of 
transitions from state to state based on input symbols, a start state, and
a set of accepting states. This definition can be extended in numerous ways,
but true FSA's accept exactly the regular languages.
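To make that definition concrete, here's a minimal sketch in Python (the
helper and the state names are my own invention, not anything from the
thread): a transition table, a start state, and a set of accepting states
suffice to recognize a regular language.

```python
# A minimal DFA sketch (function and state names are illustrative).
def make_dfa(transitions, start, accepting):
    def accepts(string):
        state = start
        for symbol in string:
            state = transitions[(state, symbol)]
        return state in accepting
    return accepts

# Example: the regular language of bit strings with an even number of 1s.
even_ones = make_dfa(
    transitions={("even", "0"): "even", ("even", "1"): "odd",
                 ("odd",  "0"): "odd",  ("odd",  "1"): "even"},
    start="even",
    accepting={"even"},
)
print(even_ones("1100"))  # -> True  (two 1s)
print(even_ones("1101"))  # -> False (three 1s)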

A Turing Machine is, essentially, a modified FSA with the ability to reread
its input, or to erase or change that input. It is a finite state machine
attached to a tape divided into cells, each containing a symbol or a blank;
at most a finite number of cells are nonblank at any one time. This definition
can be extended in numerous ways, but true TM's accept exactly the recursively
enumerable languages.
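For contrast, here's a toy simulator for a deterministic TM (again my own
sketch, with invented state names). The machine below accepts a^n b^n, a
language no FSA of any size can accept, and it manages this precisely
because it can march back and forth over its input, rewriting cells as it
goes.

```python
# A toy Turing Machine simulator (a sketch; the encoding is mine).
def run_tm(delta, tape, state="q0", accept="qA"):
    tape = dict(enumerate(tape))
    pos = 0
    while state != accept:
        sym = tape.get(pos, "_")            # "_" is the blank symbol
        if (state, sym) not in delta:
            return False                    # no move defined: reject
        state, write, move = delta[(state, sym)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return True

# delta[(state, read)] = (next_state, write, move); accepts a^n b^n.
anbn = {
    ("q0", "a"): ("q1", "X", "R"),  # mark the leftmost unmarked 'a'
    ("q0", "Y"): ("q3", "Y", "R"),  # all a's marked: check the rest
    ("q0", "_"): ("qA", "_", "R"),  # empty input: accept
    ("q1", "a"): ("q1", "a", "R"),
    ("q1", "Y"): ("q1", "Y", "R"),
    ("q1", "b"): ("q2", "Y", "L"),  # mark the matching 'b'
    ("q2", "a"): ("q2", "a", "L"),
    ("q2", "Y"): ("q2", "Y", "L"),
    ("q2", "X"): ("q0", "X", "R"),  # return for the next 'a'
    ("q3", "Y"): ("q3", "Y", "R"),
    ("q3", "_"): ("qA", "_", "R"),  # nothing left over: accept
}
print(run_tm(anbn, "aabb"))  # -> True
print(run_tm(anbn, "aab"))   # -> False
```

The trick is that the machine stores the count of a's _on the tape itself_
(as X's and Y's) rather than in its finite control - exactly the "memory"
an FSA lacks.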

The difference, then, is that a Turing Machine has a memory and a FSA does
not, and a Turing Machine can accept a set of languages which includes, as a
proper subset, the set of languages accepted by any FSA. In theory.

In practice, computers do not have unbounded auxiliary storage, which is the
source of the claim that all computers (and thus AI's as well) can be
represented by finite state automata. For instance, this SPARCstation 2 has 
about 16 meg of memory and, I believe, a 100 meg hard disk, or some equivalent 
number. Ignoring registers and other memories, this computer has just under
one billion bits of storage. Since this storage is finite, this computer
can be represented by a FSA.

However, that FSA would have 2^1,000,000,000 states, which is roughly
10^300,000,000 - that's a one with three hundred million zeroes after it,
roughly ten to the three hundred millionth power larger than a googol. This 
FSA cannot be constructed _as a FSA_, but it can be constructed as a Turing 
Machine.
(In fact, the Von Neumann architecture used in modern computers is an 
implementation of a Universal Turing Machine, but that's beside the point).
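The arithmetic is easy to check; a few lines of Python (using the post's
figures of roughly 16 meg of memory plus 100 meg of disk) confirm the order
of magnitude:

```python
import math

# Storage figures from the post: ~16 meg of memory plus ~100 meg of disk.
bits = (16 + 100) * 8 * 10**6          # just under 10^9 bits
digits = bits * math.log10(2)          # decimal digits in 2^bits
print(f"2^{bits} has about {digits:.0f} decimal digits")
# About 2.8 * 10^8 digits, i.e. on the order of 10^300,000,000 states -
# roughly 10^(300,000,000 - 100) times a googol (10^100).
```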

My whole point is that invoking the finite storage claim as a proof that
my brain is a FSA is essentially the introduction of real-world constraints
into a theoretical problem. That's acceptable - but if the argument is that
this SPARC isn't a Turing machine because it doesn't have an infinite tape, 
then I want to turn that around and point out that you can't build a FSA that
represents this machine given those same space constraints. They are in theory
mathematically equivalent, yes, and with that point I cannot argue. 
But I have a basic problem with imposing physical constraints on 
Turing Machines, and then proving that this class of Turing Machines is
equivalent to a class of FSA's that don't meet those same constraints.

Ok, space is one real-world consideration. Time is another.  A FSA for this 
SPARC cannot be constructed without foreknowledge: I must know in advance
all the possible states the system may get into and provide the appropriate
response. For a SPARC, this is `easy' - it simply requires enumerating all the
states that the system can be in and computing what the system's response
would be. This is physically impossible, but let's ignore that for now. Where
this becomes relevant is the table-lookup architecture - someone suggested 
that a simple table lookup FSA could simulate all conversations of up to
time n and thus pass the Turing Test for time n. The problem with this is
that this requires foreknowledge of _all possible conversations_, including
those that have not existed yet - asking "So what do you think of Anita Hill?"
sometime last year, for instance.  Enumerating all of these conversations
is _extremely_ difficult if not impossible without some kind of theory
of what responses are valid - and that's cognitive science.
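A quick estimate shows why that enumeration is hopeless. The vocabulary size
and conversation length below are my own illustrative guesses, not figures
from the thread:

```python
# Hypothetical sizing of the proposed conversation lookup table.
vocab = 10_000                 # distinct words a judge might type (a guess)
length = 100                   # words in one conversation (a guess)
entries = vocab ** length      # one table entry per possible conversation
print(len(str(entries)) - 1)   # -> 400, i.e. the table has ~10^400 entries
# Compare: the observable universe holds something like 10^80 particles.
```

Even with absurdly generous pruning, the table dwarfs the universe; only a
theory of which responses are valid - that is, cognitive science - could cut
it down.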

In fact, the conversations needed to pass the Turing Test may change with time,
so the FSA will need to change its output based on other variables which are 
not part of the conversation. This FSA _cannot_ be constructed in this 
universe, but a Turing Machine running an AI program conceivably could _if_
AI is possible, which it very well may not be. If we take the theoretical
time-n FSA and set n to, say, 80 years, and then convert that to a Turing
Machine that uses, instead of a massive lookup table, an algorithm that can
read, write and change both internal and external data, then we have something
that looks _very_ suspiciously like an intelligent agent ...  

Also note that a FSA for this SPARC would have to be totally reconstructed
if I added a 300M disk, but since this machine is modeled on a Universal
Turing Machine, adding the disk is exactly equivalent to adding more tape.

>>[argument about PDA <> FSA for finite sizes deleted]
>This argument seems to me to confuse notation with denotation. 
>A similar argument would show that base 2 notation is "less powerful" than
>base 10 notation. 

True. Base 2 notation is not "less powerful" from a mathematical standpoint -
it can represent all of the same numbers. But if we are talking about an
implementation, then base 2 would be "less powerful" than base 10 if we had 
ten-state switches, because an "equivalent" amount of storage of base 10
numbers could hold a larger set of numbers than the same amount of base 2
numbers. If we convert both base 10 and base 2 to base 2 in the computer's
internal representation, then we gain nothing by using base 10. One difference
between the FSA-TM distinction and this one, however, is that theoretical
FSA's and theoretical TM's don't capture the same set of languages, while
base 2 and base 10 capture the same set of numbers.
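The constant-factor nature of the base-2/base-10 gap is easy to exhibit (a
trivial sketch of my own):

```python
import math

n = 8                          # n switches of each kind
print(2**n, 10**n)             # -> 256 100000000
# Each ten-state switch is worth log2(10) ~ 3.32 two-state switches:
print(math.log2(10))           # a constant factor, nothing more
```

A constant factor can always be bought with more switches; by contrast, no
number of extra states lets a FSA accept a non-regular language, which is
why the two distinctions differ in kind.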

I guess my point is, if you (that's the general you, and not _you_ you in
particular, sorry if I've been sloppy :-) ) are going to invoke real world
constraints then _invoke them_. On the one hand, it is perfectly legitimate to 
impose space constraints on FSA's, PDA's, and TM's as a means of simulating 
what real-world machines are possible. It sounds like a stimulating area of 
theory research, but that's not my field.

On the other hand, claiming that a realizable TM can be reduced to an 
unrealizable FSA is not only theoretically shaky, it also dodges
the question of what real-world systems actually use. Of course a theory of
notation and denotation can't be implemented on a FSA. But a subset of that
theory - that might apply to finite human beings - might be implemented
on a Turing Machine.
-Anthony
--
Anthony G. Francis, Jr.  - Georgia Tech {Atl.,GA 30332}
Internet Mail Address: 	 - centaur@cc.gatech.edu
UUCP Address:		 - ...!{allegra,amd,hplabs,ut-ngp}!gatech!prism!gt4864b
-------------------------------Quote of the post------------------------------- 
"Just take the money and run, and if they give you a hassle, blow them away."
	- collected in a verbal protocol for the Bankrobber AI Project
-------------------------------------------------------------------------------