From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael Thu Feb 20 15:21:17 EST 1992
Article 3786 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!psych.toronto.edu!michael
From: michael@psych.toronto.edu (Michael Gemar)
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb16.195141.15253@psych.toronto.edu>
Organization: Department of Psychology, University of Toronto
References: <6182@skye.ed.ac.uk> <1992Feb13.234630.1092@psych.toronto.edu> <419@tdatirv.UUCP>
Date: Sun, 16 Feb 1992 19:51:41 GMT

In article <419@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <1992Feb13.234630.1092@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
>|>Moreover, they don't have to be able to say exactly what 
>|>functional organization is right in order to be able to
>|>rule out extremes.  There's nothing in this to rule out
>|>*only* table lookup.  The idea is: table lookup is wrong;
>|>not: only table lookup is wrong.
>|
>|But *why* is table lookup an "extreme"?   This is the problem that I have.
>|There seems to be some sort of implicit assumption about what kind
>|of functional organization is required to generate belief.  What I
>|am trying to uncover is what that assumption is.  If there *isn't* one
>|any deeper than "well, it's just *obvious* that table lookup can't
>|generate beliefs," then you are merely ruling them out ad hoc.       
>
>Well, I can give my own off-the-cuff impression, since that is what you
>seem to want.  (This is *not* a finished concept, just some musings).

That's fine, that's all I was looking for.

>I would say the reason we think table look-up is insufficient is that
>it is too *simple*.  It is clear from psychology and neuroanatomy that
>the human mind is an incredibly complex entity composed of a bewildering
>array of interacting subcomponents that often behaves in unexpected ways.
>
>This is *not* true of a table look-up system.

But this may simply be an accident of evolution.  Evolved entities often
do things in ways that are neither the simplest nor the most efficient,
but that are the only ways available to them given their evolutionary
history.

In addition, my interest is in how beliefs *can* be formed, not how
they actually are formed in humans, which is merely a subset of this.

>Also, at present, it seems likely that the human mind does not always
>precompute its responses, it generates them on the spot.  Again this
>is something that a table look-up system does not do, ever.
>
>So, as a first approximation I would say that an intelligent system has
>to be a complex system that computes at least some of its responses on
>the fly.

Again, see above.
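The distinction at issue -- precomputed versus on-the-fly responses -- can be
sketched concretely.  This is only a toy illustration (the agents, their
finite shared domain, and the doubling task are all invented for the
example), but it shows two behaviourally identical systems with very
different internal organization:

```python
# Toy contrast between the two architectures under discussion.
# Both agents answer "what is n doubled?" identically over a finite
# domain, but one stores every response in advance and the other
# computes each response when asked.

# Table-lookup agent: every stimulus-response pair precomputed.
LOOKUP_TABLE = {n: n + n for n in range(1000)}

def lookup_agent(n):
    # No computation at query time, only retrieval.
    return LOOKUP_TABLE[n]

# Generative agent: no stored answers; the response is produced
# on the spot.
def generative_agent(n):
    return n + n

# Behaviourally indistinguishable over the table's domain:
assert all(lookup_agent(n) == generative_agent(n) for n in range(1000))
```

Of course, the question in this thread is whether that internal difference
matters for belief, given that the behaviour is the same.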

>Now, as a practical matter, I do not believe that any *actualizable*
>table look-up system could actually fully mimic an intelligent system.

This may very well be.  To be honest, I am not convinced that it is
possible *in principle* either, but for the moment I am assuming that it is.

>|>There isn't anything in simple table lookup that corresponds
>|>in a reasonable way to beliefs.  
>|
>|The same could be said for other architectures, depending on how you
>|define "reasonable" (where is a belief in a neural net?).  In what
>|way do other architectures have structures that correspond to beliefs more
>|"reasonably"? 
>
>Well, I would say that an NN has beliefs in the sense that it classifies
>its inputs according to overall patterns.  Each pattern class then represents
>a set of beliefs about categories of inputs.
>
>And in a standard knowledge-base type expert system the contents of the
>knowledge base are the beliefs.

These are *very* different ways of representing beliefs.  What makes them
"appropriate" ways?  And what do they have in common which makes them
both "beliefs", apart from behaviour?

>|The above paragraph was merely indicating my reasoning so far.  As I indicate
>|above, I would be happy to consider any distinction you could provide WRT
>|table lookup and "real belief" architectures.  If you could provide me
>|with a clear distinction, then I would re-evaluate my thoughts on machine
>|belief.  But without someone to say where the above line of reasoning is
>|wrong, I have to believe it's right.
>
>I guess it is a matter of having data structures that map onto models
>of some entity or system.  A table-lookup has no modeling type data
>structures.

I'd like to see this fleshed out more -- specifically, why it is not
possible to interpret a table-lookup as having a model.
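One way to put the question concretely: here is a toy example (invented for
illustration) in which an explicit, structured rule and a flat table compute
the same function over a finite domain.  If the rule counts as a "model,"
the sense in which the extensionally equivalent table fails to have one
needs spelling out:

```python
# Agent with an explicit "modeling type" data structure: a
# compositional rule (here, F = m * a).
def explicit_model(mass, accel):
    return mass * accel

# Agent with no rule at all, only input/output pairs enumerated
# over a small finite domain.
TABLE = {(m, a): m * a for m in range(10) for a in range(10)}

def table_agent(mass, accel):
    return TABLE[(mass, accel)]

# Same behaviour on the shared domain; the dispute is whether the
# table's internal organization counts as "modeling" anything.
assert all(explicit_model(m, a) == table_agent(m, a)
           for m in range(10) for a in range(10))
```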


- michael
