From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!news Sun Dec  1 13:05:45 EST 1991
Article 1661 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca sci.philosophy.tech:1171 comp.ai.philosophy:1661
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!news
From: turpin@cs.utexas.edu (Russell Turpin)
Newsgroups: sci.philosophy.tech,comp.ai.philosophy
Subject: Finite-state automata and models  (was: consciousness)
Followup-To: sci.philosophy.tech,comp.ai.philosophy
Date: 27 Nov 91 04:09:57 GMT
Organization: U Texas Dept of Computer Sciences, Austin TX
Lines: 67
Message-ID: <kj66klINN9vn@cs.utexas.edu>
References: <1991Nov26.135953.5926@husc3.harvard.edu>
Summary: What is a model?

-----
In article <1991Nov26.212920.5939@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
> Given that an FSA is inherently incapable of modeling itself, 
> how can we expect an AI theorist to come up with a model of
> his own intellectual processes? 

Surely Mr Zeleny's criticism of AI does not boil down to this
easily resolved conundrum?  Nothing in AI suggests that an
intelligent individual must contain a *complete* model of 
its own behavior.  

I know of two senses in which modeling intelligence has been
viewed as essential to AI.  First, it is frequently argued that
intelligence requires some internal model of self and world.  But
this model is necessarily abstract, ie, it leaves out some
details.  To take an easy example, when I estimate the effort
required of me to perform some task, I deal with an internal
model of how I work, the difficulty of various tasks, etc.  But I
do not need to know the state of each of my neurons!  Indeed,
dealing with such unnecessary detail would be a hindrance.  As
modeling experts frequently point out, the key to a useful model
lies in finding the level of abstraction that is required for
its purpose.  If *complete* detail is required, it is usually
simpler to deal with the system itself rather than a model. 

In a second way, it might be argued that a complete model is
required.  The argument runs thus: if intelligence can be
implemented by a machine, then it can be modeled in full by a
machine.  But even if one accepts this argument, it does NOT
follow that any machine that implements intelligence must be
able to model itself.  At most, it means that a second machine
is possible (in some sense) that models the first.  And in
fact, any FSA can be modeled in full by some other FSA.
Conundrum resolved.  (We must also be careful of the sense of
possibility used above.  Pragmatic issues may intervene without
challenging the theoretical claim.  There may be real machines
that function, in the important respects with which we are
concerned, as FSA's, but that we cannot *individually* model
because, as a matter of technology, we are unable to take them
apart without losing some of their information content.)
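The claim that any FSA can be modeled in full by a second, distinct
FSA is easy to see concretely.  A minimal sketch, in Python and with
a representation of my own choosing (none of these names come from
the discussion above): an FSA as a transition table, and a second
machine built from it that has the same structure but is a separate
machine -- the modeler is not the modeled.

```python
def make_fsa(states, start, accepting, delta):
    """An FSA as a dict; delta maps (state, symbol) -> next state."""
    return {"states": states, "start": start,
            "accepting": accepting, "delta": delta}

def accepts(fsa, string):
    """Run the FSA on a string; True iff it halts in an accepting state."""
    state = fsa["start"]
    for symbol in string:
        state = fsa["delta"][(state, symbol)]
    return state in fsa["accepting"]

def model_of(fsa):
    """A second FSA that models the first in full: same transition
    structure, but with states renamed so it is a distinct machine."""
    rename = {s: ("model", s) for s in fsa["states"]}
    return make_fsa(
        states={rename[s] for s in fsa["states"]},
        start=rename[fsa["start"]],
        accepting={rename[s] for s in fsa["accepting"]},
        delta={(rename[s], a): rename[t]
               for (s, a), t in fsa["delta"].items()})

# Example: an FSA over {0,1} accepting strings with an even number of 1s.
even_ones = make_fsa(
    states={"even", "odd"}, start="even", accepting={"even"},
    delta={("even", "0"): "even", ("even", "1"): "odd",
           ("odd", "0"): "odd", ("odd", "1"): "even"})

copy = model_of(even_ones)
# The model agrees with the original on every input -- a complete
# model, held by a second machine rather than by the first itself.
for w in ["", "0", "1", "1100", "10101"]:
    assert accepts(even_ones, w) == accepts(copy, w)
```

None of this touches the pragmatic caveat above: the construction
assumes we already have the transition table in hand, which for a
real machine is exactly what technology may deny us.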

-----
While I must confess that I do not understand all of Mr Zeleny's
arguments, it seems to me that he occasionally leaves out
crucial details, leaving room for equivocation.  Perhaps there
is some sense in which AI requires a model of intelligence that
precludes an FSA implementation of intelligence.  But I do not
know it, and in my view, Mr Zeleny has done nothing to show us
what this sense is.

Similarly, I do not see in what sense humans perform an infinite
recursion.  I know I do not know in full what it means, in all
cases, for some utterance to mean something.  Indeed, my reading
of philosophy -- which I confess is much less than Mr Zeleny's --
leads me to believe that this remains an open issue.  More
importantly, whatever philosophers do, people unstudied in
philosophy succeed in denoting without having first fleshed out a
theory of meaning.  Where is *their* infinite recursion?

Perhaps there are sound arguments here which I am too unlearned
to see, and were I more versed in the background literature, I
would have no problem filling in the details that Mr Zeleny
omits.  But if so, I suspect that there are other participants
of s.p.t who find themselves in the same boat regarding Mr
Zeleny's various arguments.

Russell
