From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima Thu Feb 20 15:22:22 EST 1992
Article 3892 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!uunet!tdatirv!sarima
From: sarima@tdatirv.UUCP (Stanley Friesen)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and Panpsychism
Message-ID: <433@tdatirv.UUCP>
Date: 20 Feb 92 00:49:30 GMT
References: <6182@skye.ed.ac.uk> <1992Feb13.234630.1092@psych.toronto.edu> <419@tdatirv.UUCP> <1992Feb16.195141.15253@psych.toronto.edu>
Reply-To: sarima@tdatirv.UUCP (Stanley Friesen)
Organization: Teradata Corp., Irvine
Lines: 73

In article <1992Feb16.195141.15253@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:
|In article <419@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
|>I would say the reason we think table look-up is insufficient is that
|>it is too *simple*.  ...
|>This is *not* true of a table look-up system.
|
|But this may simply be an accident of evolution.  There are many ways in
|which evolved entities do things which are not the simplest or most
|efficient way, but which are the only ways available to them given
|their evolutionary history.  

True enough, and I would scarcely require something precisely as complex as
a human mind.

What I was getting at is that every system I can imagine that resembles
the human mind in the ways relevant to being called intelligent is of a
rather high order of complexity.

This is not conclusive, of course; it is just an intuition, if you will.

One of the areas involved is dynamic adaptation to new situations; another
is the matter of modelling (discussed below).  Both seem to me to require
substantial complexity.
|
|>So, as a first approximation I would say that an intelligent system has
|>to be a complex system that computes at least some of its responses on
|>the fly.
|
|Again, see above.

I guess it just seems to me that these are the relevant 'defining'
characteristics, and are thus minimal.  (Again, I know this is arguable.)
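
To make the contrast concrete, here is a toy sketch (modern Python; every
name in it is invented for illustration, not anybody's actual system) of a
pure table look-up responder next to one that computes its answers on the
fly:

LOOKUP = {
    ("idle", "2+2"): "4",
    ("idle", "3+5"): "8",
    # ...one row per anticipated input; nothing is derived at run time
}

def table_agent(state, stimulus):
    # Pure retrieval; fails (KeyError) on anything not tabulated in advance.
    return LOOKUP[(state, stimulus)]

def computing_agent(state, stimulus):
    # Derives the answer on the fly from a compact internal rule.
    left, right = stimulus.split("+")
    return str(int(left) + int(right))

print(table_agent("idle", "2+2"))       # "4"
print(computing_agent("idle", "40+2"))  # "42", though never tabulated

The second agent handles inputs its builder never enumerated, which no
finite table can; that, roughly, is the kind of complexity I have in mind.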

|>Well, I would say that an NN has beliefs in the sense that it classifies
|>its inputs according to overall patterns.  Each pattern class then represents
|>a set of beliefs about categories of inputs.
|>
|>And in a standard knowledge-base type expert system the contents of the
|>knowledge base are the beliefs.
|
|These are *very* different ways of representing beliefs.  What makes them
|"appropriate" ways?  And what do they have in common which makes them
|both "beliefs", apart from behaviour?

I would say that they both represent *models*, and that beliefs must take
a form that can be treated as a model.  (See below.)
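
A toy illustration (Python again; the encoding and all the names are
invented for the example, not a claim about how real NNs or expert
systems are built) of the sense in which both count as model-like:

# (a) Expert-system style: beliefs are explicit entries in a knowledge base.
knowledge_base = {
    ("bird", "can_fly"): True,
    ("penguin", "can_fly"): False,
}

# (b) NN style: a 'belief' is implicit in which pattern class an input
# falls into; a trivial stand-in classifier plays that role here.
def classify(features):
    has_wings, mass_kg = features
    return "flier" if has_wings and mass_kg < 10.0 else "non-flier"

print(knowledge_base[("penguin", "can_fly")])  # False
print(classify((True, 30.0)))                  # "non-flier"

In both cases some internal structure stands for categories of things in
the world, and that is what lets us read beliefs off of it.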

|>I guess it is a matter of having data structures that map onto models
|>of some entity or system.  A table-lookup has no modeling type data
|>structures.
|
|I'd like to see this fleshed out more; specifically, why is it not
|possible to interpret a table-lookup as having a model.

I think of a model as having a structure that is correlated with the
structure of the entity or system modelled.  I find it hard to see any
such representational structure in a flat transition table.

More formally, it has no *mapping* from internal data structures to
entities in the system being modelled (at least once gerrymandered
mappings are ruled out).

I think both of the other examples do have some sort of 'natural' mapping
between 'beliefs' and the systems about which the beliefs are held.
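
Sketched crudely (Python once more; the 'solar' example and all its names
are mine, purely illustrative):

# Model-like: each key names an entity in the modelled system, so there is
# a natural, non-gerrymandered mapping from internal parts to the world.
solar_model = {
    "earth": {"orbits": "sun", "period_days": 365.25},
    "moon":  {"orbits": "earth", "period_days": 27.3},
}

def model_answer(body):
    # Read the answer off a structure whose parts stand for parts of the world.
    return "the " + solar_model[body]["orbits"]

# Flat transition table: behaviourally adequate for the same questions, but
# no row or key of it stands for 'moon' or 'orbit' as such.
transitions = {
    (0, "what does the moon orbit?"):  (0, "the earth"),
    (0, "what does the earth orbit?"): (0, "the sun"),
}

def table_answer(state, question):
    return transitions[(state, question)][1]

print(model_answer("moon"))                          # "the earth"
print(table_answer(0, "what does the moon orbit?"))  # "the earth"

The two give the same behaviour, but only the first has internal parts
that map onto the entities being talked about.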


As I warned at the beginning, this is all still very sketchy and informal.
It would take a good deal of research in cognitive science to flesh it out
properly.
-- 
---------------
uunet!tdatirv!sarima				(Stanley Friesen)