From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky Tue Mar 24 09:57:01 EST 1992
Article 4571 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:4571 sci.philosophy.tech:2342
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!micro-heart-of-gold.mit.edu!news.media.mit.edu!minsky
From: minsky@media.mit.edu (Marvin Minsky)
Subject: Re: Causes and Goals
Message-ID: <1992Mar18.190801.29917@news.media.mit.edu>
Sender: news@news.media.mit.edu (USENET News System)
Cc: minsky
Organization: MIT Media Laboratory
References: <1992Mar17.110436.9937@husc3.harvard.edu> <1992Mar18.044516.28882@nuscc.nus.sg> <6429@skye.ed.ac.uk>
Date: Wed, 18 Mar 1992 19:08:01 GMT
Lines: 93

In article <6429@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>In article <1992Mar18.044516.28882@nuscc.nus.sg> smoliar@iss.nus.sg (stephen smoliar) writes:
>>In article <1992Mar17.110436.9937@husc3.harvard.edu> zeleny@zariski.harvard.edu
>>(Mikhail Zeleny) writes:
>>>
>>>A cloud is not an agent.
>>>
>>Appealing as this assertion is to the intuition, I would like to pursue an
>>angle for questioning it.  My theme is basically a variation on the approach
>>to models which Minsky took in "Matter, Mind, and Models" (which I recently
>>cited and Marvin subsequently developed).  Just as the question of whether
>>or not an entity A* constitutes a model of another entity A can only be
>>resolved in the context of some postulated observer B, so I would argue
>>that whether or not any entity X is an agent cannot be resolved in terms
>>of necessary and sufficient conditions on the attributes of X.  Rather,
>>one can only address whether or not a given observer Y is attributing agency
>>to X.
>
>This seems a rather elaborate way of saying that there's no fact
>of the matter as to whether something is an agent or not.  All we
>can say is "Y is attributing agency, but Z is not".
>
>But when someone is attributing agency, what is it that they're
>attributing?  What properties do they think an agent has?
>
>If there's a different set of properties in every case, how can we
>say they're all talking about the same thing (agency)?  And if there
>are some common properties, we have a way of assessing agency apart
>from whether someone is attributing it.  (Whether the result is a
>set of necessary and sufficient conditions or, say, a "family
>resemblance" is another matter.)

Below is a statement about this.   As for your question, we generally
cannot be sure they're talking about the same thing. 

From "The Society of Mind", page 302  [MENTAL MODELS]

Does a book know what is written inside it?  Clearly, no.  Does a book
@i[contain] knowledge?  Clearly, yes.  But how could anything contain
knowledge, yet not know it?  We've seen how saying that a person or
machine possesses knowledge can amount to saying that @i[some observer
could employ that person or machine to answer certain kinds of
questions.]  Here is another view of this:

        "Jack knows about @b[A]" means that there is a "model" of
@b[A] inside Jack's head.

But what does it mean to say that one thing is a model of another?
Again we have to specify some standard or authority.  Let's make Jack
be the judge of that:

        Jack considers @B[M] to be a good model of @b[A] to the extent
that he finds @B[M] useful for answering questions about @b[A].


For example, suppose that @b[A] is a real automobile and @B[M] is the
kind of object that we call a "toy" or "model" car.  Then Jack will be
able to substitute @B[M] for @b[A] or use @B[M] to answer certain
questions about @b[A].  However, we would think it strange to say that
@B[M] is Jack's "knowledge" about @b[A] - because the toy car is a
physical object outside Jack's head.  Accordingly, a person could
possess a "mental model," too - in the form of some process or
sub-society of agents inside the brain.  This provides us
with a simple explanation of what we mean by knowledge: @i[Jack's
knowledge about @b[A] is simply whichever mental models, processes, or
agencies Jack's other agencies can use to answer questions about
@b[A].]  Thus, a person's mental model of a car need not itself
resemble an actual car in any obvious way.  It need not itself be
heavy, fast, or consume gasoline, to be able to answer questions about a
car, like @i["How heavy is it?"] or @i["How fast can it go?"]

Our mental models also work in social realms, to answer questions
like, @i["Who owns that car?"]  or @i["Who allowed you to park it
there?"] However, to understand questions like these, we have to ask
what people mean by "who" - and the answer is that @i[we make mental
models of people, too.]  In order for Mary to "know" about Jack's
dispositions, motives, and possessions, Mary has to build inside her
head some structure to help answer those kinds of questions - and that
structure will constitute her mental model of Jack.  Such a model can
also answer psychological queries like @i["What are Jack's ideals?"]
Quite possibly, Mary's
model of Jack will be able to produce more accurate answers to such
questions than Jack himself could produce.  For people's mental models
of their friends are often better, in certain respects, than their
mental models of themselves.

We all make models of ourselves, and use them to predict which sorts
of things we'll later be disposed to do.  Naturally, our models of
ourselves will often provide us with wrong answers, because they
aren't really faultless ways to see ourselves; they're merely
self-made answering machines.
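
Incidentally, the observer-relative definition in the excerpt is easy to
restate as a tiny program.  The sketch below is only an illustration (the
names - jacks_car_model, usefulness, and so on - are hypothetical, not
from the book): a candidate model M is scored solely by how useful some
judge finds it for answering questions about A, and M itself is nothing
like a car - it is not heavy, not fast, and burns no gasoline.

# A minimal sketch (hypothetical names throughout) of the observer-relative
# notion of a "model": M counts as a good model of A only to the extent
# that some judge finds M useful for answering questions about A.

# Jack's mental model M of a car: just an answering process.  It is not
# itself heavy or fast and consumes no gasoline, yet it answers questions.
def jacks_car_model(question):
    answers = {
        "How heavy is it?": "about 1500 kg",
        "How fast can it go?": "about 180 km/h",
    }
    return answers.get(question)  # None when the model has no answer

def usefulness(model, questions, judge_accepts):
    """Jack is the judge: score M by the fraction of his questions
    about A for which he accepts M's answer as useful."""
    accepted = sum(1 for q in questions if judge_accepts(q, model(q)))
    return accepted / len(questions)

# Jack's (assumed) standard of usefulness: any non-empty answer will do.
score = usefulness(
    jacks_car_model,
    ["How heavy is it?", "How fast can it go?", "Who owns that car?"],
    judge_accepts=lambda q, a: a is not None,
)
print("Usefulness of M to Jack: %.2f" % score)   # prints 0.67

A different judge, or a different set of questions (say, Mary asking about
Jack's ideals), would give the same M a different score - which is the
point: whether something is a model, or knowledge, is relative to the
observer doing the judging.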






