From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!tdatirv!sarima Thu Feb 20 15:21:02 EST 1992
Article 3760 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and Panpsychism
Message-ID: <419@tdatirv.UUCP>
Date: 14 Feb 92 21:20:45 GMT
References: <6171@skye.ed.ac.uk> <1992Feb13.014116.9941@psych.toronto.edu> <6182@skye.ed.ac.uk> <1992Feb13.234630.1092@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 69

In article <1992Feb13.234630.1092@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|>Moreover, they don't have to be able to say exactly what 
|>functional organization is right in order to be able to
|>rule out extremes.  There's nothing in this to rule out
|>*only* table lookup.  The idea is: table lookup is wrong;
|>not: only table lookup is wrong.
|
|But *why* is table lookup an "extreme"?   This is the problem that I have.
|There seems to be some sort of implicit assumption about what kind
|of functional organization is required to generate belief.  What I
|am trying to uncover is what that assumption is.  If there *isn't* one
|any deeper than "well, it's just *obvious* that table lookup can't
|generate beliefs," then you are merely ruling them out ad hoc.       

Well, I can give my own off-the-cuff impression, since that is what you
seem to want.  (This is *not* a finished concept, just some musings).

I would say the reason we think table look-up is insufficient is that
it is too *simple*.  It is clear from psychology and neuroanatomy that
the human mind is an incredibly complex entity, composed of a bewildering
array of interacting subcomponents, and that it often behaves in
unexpected ways.

This is *not* true of a table look-up system.
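
To make the contrast concrete, here is a toy sketch in Python (every name
and entry invented for illustration): a pure table look-up system whose
entire "mind" is one flat mapping from stimuli to canned responses.

    # A pure table look-up "mind": every response is precomputed.
    # There is no structure beyond the flat table itself.
    RESPONSES = {
        "hello":            "Hi there.",
        "what is 2 + 2?":   "4",
        "how do you feel?": "Fine, thanks.",
    }

    def respond(stimulus):
        # No computation, no interacting subcomponents -- just retrieval.
        return RESPONSES.get(stimulus, "I don't understand.")

    print(respond("hello"))           # Hi there.
    print(respond("what is 3 + 5?"))  # I don't understand. (not in the table)

There are no subcomponents here to interact, and nothing for psychology
or neuroanatomy to study beyond the table itself.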

Also, at present, it seems likely that the human mind does not always
precompute its responses; it generates at least some of them on the spot.
Again, this is something that a table look-up system does not do, ever.

So, as a first approximation I would say that an intelligent system has
to be a complex system that computes at least some of its responses on
the fly.
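
For contrast, here is an equally toy sketch (mine, and deliberately
trivial) of a system that computes one class of responses on the spot:
it answers arithmetic questions that no finite table was ever prepared for.

    # Generates arithmetic answers on the fly rather than retrieving
    # them from a precomputed table.
    def respond(stimulus):
        words = stimulus.rstrip("?").split()
        # Recognize the pattern "what is <a> + <b>"
        if words[:2] == ["what", "is"] and "+" in words:
            i = words.index("+")
            return str(int(words[i - 1]) + int(words[i + 1]))
        return "I don't understand."

    print(respond("what is 3 + 5?"))      # 8
    print(respond("what is 123 + 456?"))  # 579 -- never stored anywhere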


Now, as a practical matter, I do not believe that any *actualizable*
table look-up system could fully mimic an intelligent system: the table
would need an entry for every possible history of inputs, and the number
of such histories grows combinatorially beyond anything physically storable.
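
A quick back-of-the-envelope calculation shows why (both figures below
are invented; only the growth rate matters):

    # Size of a table handling a 50-exchange conversation where each
    # input is one of 10,000 possible sentences.  (Figures made up --
    # the point is the combinatorial explosion.)
    sentences_per_turn = 10_000
    turns = 50
    entries = sentences_per_turn ** turns
    print(f"table entries needed: 10^{len(str(entries)) - 1}")  # 10^200
    # Compare: roughly 10^80 atoms in the observable universe.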

|>There isn't anything in simple table lookup that corresponds
|>in a reasonable way to beliefs.  
|
|The same could be said for other architectures, depending on how you
|define "reasonable" (where is a belief in a neural net?).  In what
|way do other architectures have structures that correspond to beliefs more
|"reasonably"? 

Well, I would say that an NN has beliefs in the sense that it classifies
its inputs according to overall patterns.  Each pattern class then represents
a set of beliefs about categories of inputs.
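
A cartoon of what I mean (one unit, weights picked by hand rather than
learned, so treat it as a sketch of a trained net): the weight vector is
the net's "belief" about what makes an input a member of the class.

    # A one-unit "net" whose weights encode a belief about a category:
    # roughly, "inputs with large first and third features are class A".
    WEIGHTS = [0.9, -0.2, 0.8]  # hand-picked; a real net would learn these
    BIAS = -0.5

    def classify(features):
        activation = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
        return "class A" if activation > 0 else "not class A"

    print(classify([1.0, 0.0, 1.0]))  # class A
    print(classify([0.0, 1.0, 0.0]))  # not class A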

And in a standard knowledge-base type expert system, the contents of the
knowledge base are the beliefs.
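
For instance (rules and facts invented for the example), the entries below
are the system's beliefs; the inference engine merely applies them:

    # The knowledge base -- its contents are the system's "beliefs".
    FACTS = {"has_fever", "has_cough"}
    RULES = [
        ({"has_fever", "has_cough"}, "may_have_flu"),
        ({"may_have_flu"},           "recommend_rest"),
    ]

    # Simple forward chaining: apply rules until nothing new follows.
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= FACTS and conclusion not in FACTS:
                FACTS.add(conclusion)
                changed = True

    print(FACTS)  # now includes may_have_flu and recommend_rest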

|The above paragraph was merely indicating my reasoning so far.  As I indicate
|above, I would be happy to consider any distinction you could provide WRT
|table lookup and "real belief" architectures.  If you could provide me
|with a clear distinction, then I would re-evaluate my thoughts on machine
|belief.  But without someone to say where the above line of reasoning is
|wrong, I have to believe it's right.

I guess it is a matter of having data structures that map onto models
of some entity or system.  A table look-up system has no model-like
data structures.
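
Something like this is what I have in mind (a deliberately tiny sketch):
a data structure whose state maps onto an external entity, so the system
can answer questions about states no table entry was ever written for.

    # A data structure that *models* an entity: its fields map onto the
    # object's actual position and velocity, so predictions are derived,
    # not looked up.
    class BallModel:
        def __init__(self, position, velocity):
            self.position = position
            self.velocity = velocity

        def predict(self, seconds):
            return self.position + self.velocity * seconds

    ball = BallModel(position=0.0, velocity=2.5)
    print(ball.predict(4.0))   # 10.0
    print(ball.predict(17.3))  # 43.25 -- any time at all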



Of course, this does not mean that an intelligent machine is actually
possible, just that it cannot be ruled out a priori.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)


