From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!news.media.mit.edu!minsky Tue Mar 24 09:57:35 EST 1992
Article 4621 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:4621 sci.philosophy.tech:2374
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!thunder.mcrcim.mcgill.edu!snorkelwacker.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Goals
Message-ID: <1992Mar20.023251.10939@news.media.mit.edu>
Date: 20 Mar 92 02:32:51 GMT
References: <6429@skye.ed.ac.uk> <1992Mar18.190801.29917@news.media.mit.edu> <1992Mar19.181821.1631@memstvx1.memst.edu>
Sender: news@news.media.mit.edu (USENET News System)
Organization: MIT Media Laboratory
Lines: 43
Cc: minsky

In article <1992Mar19.181821.1631@memstvx1.memst.edu> langston@memstvx1.memst.edu writes:
>In article <1992Mar18.190801.29917@news.media.mit.edu>, minsky@media.mit.edu (Marvin Minsky) writes:
>
>   [...a lot of stuff deleted...]
>
>> Our mental models also work in social realms, to answer questions
>> like, @i["Who owns that car?"]  or @i["Who allowed you to park it
>> there?"] However, to understand questions like these, we have to ask
>> what people mean by "who" - and the answer is that @i[we make mental
>> models of people, too.]  In order for Mary to "know" about Jack's
>> dispositions, motives, and possessions, Mary has to build inside her
>> head some structure to help answer those kinds of questions - and that
>> structure will constitute her mental model of Jack.  That model can
>> then answer psychological queries like @i["What are Jack's ideals?"]
>> Quite possibly, Mary's
>> model of Jack will be able to produce more accurate answers to such
>> questions than Jack himself could produce.  For people's mental models
>> of their friends are often better, in certain respects, than their
>> mental models of themselves.
>> 
>> We all make models of ourselves, and use them to predict which sorts
>> of things we'll later be disposed to do.  Naturally, our models of
>> ourselves will often provide us with wrong answers, because they
>> aren't really faultless ways to see ourselves, they're merely
>> self-made answering machines.
>
>
>  Okay, but just how OFTEN do we (i.e., ourselves) actually make use of
>these self-referential models, except maybe to communicate intention to
>other agents, who then use this information to update THEIR models of US
>(this, of course, assumes we all agree that we build complex mental models).
>If we use these models to OUR benefit at all, it escapes me what good they
>are, except maybe in a planning situation (but then, if one's life is so
>rigid as to adhere to plans made at some prior time regardless of intervening
>events, one deserves the consequences).

Try reading the next 3 or 4 pages of _The Society of Mind_.  The idea
is that these models may be vital in the sense of stabilizing our
social personalities, so that you can make longer range plans and
actually carry them out.  Your life needs some rigidity to be
interesting, just as it must also have enough flexibility.  And you
may not be aware of this yourself but (as I hinted above) your friends
(and that includes me) may see more stability in you than you might
want to believe.
