From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!wupost!darwin.sura.net!ukma!memstvx1!langston Tue Mar 24 09:57:33 EST 1992
Article 4617 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:4617 sci.philosophy.tech:2372
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!wupost!darwin.sura.net!ukma!memstvx1!langston
From: langston@memstvx1.memst.edu
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Goals
Message-ID: <1992Mar19.181821.1631@memstvx1.memst.edu>
Date: 20 Mar 92 00:18:21 GMT
References: <1992Mar17.110436.9937@husc3.harvard.edu> <1992Mar18.044516.28882@nuscc.nus.sg> <6429@skye.ed.ac.uk> <1992Mar18.190801.29917@news.media.mit.edu>
Organization: Memphis State University
Lines: 46

In article <1992Mar18.190801.29917@news.media.mit.edu>, minsky@media.mit.edu (Marvin Minsky) writes:

   [...a lot of stuff deleted...]

> Our mental models also work in social realms, to answer questions
> like, @i["Who owns that car?"]  or @i["Who allowed you to park it
> there?"] However, to understand questions like these, we have to ask
> what people mean by "who" - and the answer is that @i[we make mental
> models of people, too.]  In order for Mary to "know" about Jack's
> dispositions, motives, and possessions, Mary has to build inside her
> head some structure to help answer those kinds of questions - and that
> structure will constitute her mental model of Jack.  Mary can then use
> that model to answer psychological queries like @i["What are Jack's
> ideals?"]  Quite possibly, Mary's
> model of Jack will be able to produce more accurate answers to such
> questions than Jack himself could produce.  For people's mental models
> of their friends are often better, in certain respects, than their
> mental models of themselves.
> 
> We all make models of ourselves, and use them to predict which sorts
> of things we'll later be disposed to do.  Naturally, our models of
> ourselves will often provide us with wrong answers, because they
> aren't really faultless ways to see ourselves, they're merely
> self-made answering-machines.


  Okay, but just how OFTEN do we (i.e., we ourselves) actually make use of
these self-referential models, except maybe to communicate intention to
other agents, who then use that information to update THEIR models of US?
(This, of course, assumes we all agree that we build complex mental models.)
If we do use these models to OUR benefit at all, it escapes me what good they
are, except maybe in a planning situation (but then, if one's life is so
rigid as to adhere to plans made at some prior time, regardless of intervening
events, one deserves the consequences).  Just how far 'down the road' are
we talking about when it comes to predicting future actions?  Except as a
default over a very brief period of time, assuming no change whatsoever in
the agent or its environment (try pulling THAT off!), these 'predictions'
would be close to useless, wouldn't they?
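
  Just to pin down what I mean by treating these models as "answering
machines," here is a toy sketch in Python (the names and structure are
entirely my own invention, not anything Minsky proposed): a model is just
a store of believed answers, plus a "prediction" that is nothing more than
the current belief treated as a default over a brief window.

    # Toy sketch: a mental model as a self-made answering machine.
    # Everything here is invented purely for illustration.

    class Model:
        def __init__(self, beliefs):
            # beliefs: dict mapping queries to believed answers,
            # e.g. {"ideal": "honesty", "owns_car": True}
            self.beliefs = dict(beliefs)

        def ask(self, query):
            # Answer from stored beliefs; admit ignorance otherwise.
            return self.beliefs.get(query, "don't know")

        def predict(self, query, elapsed, horizon=1.0):
            # A "prediction" is just the current belief, assumed to hold
            # only over a very brief window with no change in the agent
            # or its environment.
            if elapsed > horizon:
                return "no useful prediction"
            return self.ask(query)

    # Mary's model of Jack vs. Jack's (possibly worse) model of himself.
    marys_model_of_jack = Model({"ideal": "honesty", "owns_car": True})
    jacks_self_model = Model({"ideal": "adventure"})

    print(marys_model_of_jack.ask("ideal"))        # honesty
    print(jacks_self_model.predict("ideal", 0.5))  # adventure
    print(jacks_self_model.predict("ideal", 10))   # no useful prediction

Nothing in the sketch makes the self-model any better than anyone else's
model of the same agent; it just answers from whatever it happens to store
and defaults over a short horizon, which is about all I would grant these
models in the first place.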
 
-- 

Mark C. Langston                                  "What concerns me is not the
Psychology Department                              way things are, but rather
Memphis State University                           the way people think things
LANGSTON@MEMSTVX1.MEMST.EDU                        are."     -Epictetus

     "...a brighter tomorrow?!?  How about a better TODAY?"  -me


