Article 4810 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!cs.utexas.edu!rutgers!ub!galileo.cc.rochester.edu!rochester!kodak!ispd-newsserver!psinntp!sunic!seunet!kullmar!pkmab!ske
From: ske@pkmab.se (Kristoffer Eriksson)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <6717@pkmab.se>
Date: 25 Mar 92 23:27:13 GMT
References: <1992Feb27.185327.2687@oracorp.com>
Organization: Peridot Konsult i Mellansverige AB, Oerebro, Sweden
Lines: 44

In article <1992Feb27.185327.2687@oracorp.com> daryl@oracorp.com writes:
>To me, the main elements of thinking are (a) memory, and (b) a
>mechanism for producing future behavior influenced by memory and
>inputs.

I think that an important part of thinking is the manipulation of a
world-model that the thinking entity is maintaining.

Having a memory may be just an accessory (although a necessary one) for
storing that model. And thinking may be a mechanism involved in producing
future behavior, but it is not uniquely determined by that characterization.
There are other ways to produce future behavior.

I think having a world-model (and preferably a self-model) may answer some
questions about which systems "think" in a more satisfactory way than
purely behavioral arguments do (supposing AI is possible). If we say that
a system that can "think" must internally maintain a non-trivial
world-model, then a simple lookup table is ruled out: it usually contains
only data about the output behavior, and nothing that could be considered
to represent a world-model, except in the trivial sense of a literal
recording of past inputs. Another system that performs more complicated
computations, and that internally contains something corresponding in
suitable ways to external reality, such that it could be a world-model,
would still pass the requirement.

The simple lookup table may produce "intelligent behavior", but it does
not, by itself, actually "think".
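To make the contrast concrete, here is a minimal sketch (in Python, and
with every name in it my own hypothetical invention, not anything from
the article above): two agents with identical external behavior in a
trivial one-dimensional world, where one is a pure lookup table over its
input history and the other maintains a one-variable world-model.

# A minimal sketch, assuming a trivial world where the agent senses
# each step it has been moved and decides whether to keep heading
# toward a goal. All names here are hypothetical illustrations.

class LookupTableAgent:
    """Maps a literal recording of past inputs straight to an action;
    nothing inside corresponds to the external world."""
    def __init__(self, table):
        self.history = ()     # the trivial "model": raw past inputs
        self.table = table    # precomputed (input history -> action)

    def act(self, percept):
        self.history = self.history + (percept,)
        return self.table[self.history]

class WorldModelAgent:
    """Maintains internal state that corresponds, in a suitable way,
    to something in external reality (its position)."""
    def __init__(self, goal):
        self.position = 0     # internal counterpart of the world
        self.goal = goal

    def act(self, percept):
        self.position += percept      # update the world-model
        return "right" if self.position < self.goal else "stay"

# Both produce the same external behavior on this input sequence:
table = {(1,): "right", (1, 1): "right", (1, 1, 1): "stay"}
lut, wm = LookupTableAgent(table), WorldModelAgent(goal=3)
for percept in (1, 1, 1):
    assert lut.act(percept) == wm.act(percept)

Behaviorally the two are indistinguishable on this input, but only the
second contains anything that could be read as a world-model, which is
exactly the distinction I am after.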

Of course, to make this idea more rigorous, I would have to define
"world-model" and many other things better, and it would also help to
show how "meaning" fits into the picture. I wouldn't mind trying that at
another time, but for the moment, let me offer this idea simply as a way
to make a distinction between systems with the same external behavior.

(Another way of doing the same would be to count the world-model as an
output of the system, but then we would have two classes of outputs:
those that consist of external behavior (conversation output and such),
and those that don't (purely formal outputs, which you can't observe
externally).)
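
In the same toy terms as above (and again with hypothetical names), that
framing would make the step function return both kinds of output, only
the first of which is externally observable:

# A rough sketch of this alternative framing: the world-model is
# counted as a second, purely formal output alongside the external
# behavior.

class ModelReportingAgent:
    def __init__(self, goal):
        self.position = 0
        self.goal = goal

    def step(self, percept):
        self.position += percept
        action = "right" if self.position < self.goal else "stay"
        model = {"position": self.position, "goal": self.goal}
        return action, model  # external behavior vs. formal output

An observer of the behavior alone sees only the actions; the model comes
out only through the purely formal channel.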

-- 
Kristoffer Eriksson, Peridot Konsult AB, Hagagatan 6, S-703 40 Oerebro, Sweden
Phone: +46 19-13 03 60  !  e-mail: ske@pkmab.se
Fax:   +46 19-11 51 03  !  or ...!{uunet,mcsun}!mail.swip.net!kullmar!pkmab!ske


