From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!milton!forbis Wed Dec 18 16:01:56 EST 1991
Article 2169 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2169 sci.philosophy.tech:1448
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usenet.coe.montana.edu!milton!forbis
From: forbis@milton.u.washington.edu (Gary Forbis)
Subject: Re: Meaning and Agency
Message-ID: <1991Dec16.195714.14302@milton.u.washington.edu>
Organization: University of Washington, Seattle
References: <1991Dec13.164821.6536@husc3.harvard.edu> <1991Dec16.070537.7377@milton.u.washington.edu> <1991Dec16.100707.6637@husc3.harvard.edu>
Date: Mon, 16 Dec 1991 19:57:14 GMT

In article <1991Dec16.100707.6637@husc3.harvard.edu> zeleny@brauer.harvard.edu (Mikhail Zeleny) writes:
>In article <1991Dec16.070537.7377@milton.u.washington.edu> 
>forbis@milton.u.washington.edu (Gary Forbis) writes:

MZ:
>>>(1) Saying that an agent A meant p by s is equivalent to saying that A
>>>intended the utterance of s to produce a specifiable effect with the
>>>propositional content p in his audience by means of their recognition of
>>>his intention in the context of s.  The propositional content p can be
>>>uniquely associated with the equivalence class of sentence-tokens
>>>synonymous with s in the contexts of their utterance, as determined by the
>>>semantic conventions of the language employed by A.  Note that p is not
>>>necessarily open to the awareness of A; in other words, A doesn't
>>>necessarily know what he means by s, as opposed to what he intends it to
>>>convey.  He must nevertheless commit himself to the objective meaning of
>>>his utterance, which transcends his subjective intention, and can only be
>>>determined culturally and contextually.

By further clarification I now understand "He" in the last sentence to refer
to the agent "A".  I would certainly like to commit myself to the objective
meaning of my utterances.  To do so I would need a better awareness of that
objective meaning.  I will come back to this.

GF:
>>I'm a little confused by this.  May I assume that the agent "Saying that
>>an agent A meant p by s" need not be A?  That is to say, the language 
>>employed by A need not be available to the agent saying "agent A meant
>>p by s"?

MZ:
>Yes on both counts.

MZ:
>>>(2) Saying that s meant p is equivalent to saying that there exists an
>>>intentional causal relation between the occurrence of s and an agent's
>>>prior meaning p by a type-identical utterance s' meaning p.

I'm not sure I understand intentionality sufficiently to comment at this time.
I will come back to this also.

MZ:
>>>(3) Saying that s means p is equivalent to saying that the occurrence of s
>>>can be causally associated with a type-identical possible utterance of s'
>>>by an assumed agent meaning p.

OK.  Consider a robot running some program.  The robot has motor control over
some wheels, which gives it mobility, but it has no arms.  People use this
robot to send mail between desks.  It can generate several utterances, among
which is: "I have mail for you $name." (where $name is replaced by the name
of the individual owning the desk at which the robot has arrived).  This
activity is coupled with the robot emptying an internal bin containing the
mail for $name into a tray on its top.
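The robot's behavior as described can be sketched in a few lines of code.
This is purely an illustration of the scenario; the class and method names
(MailRobot, load, deliver) are hypothetical, not any actual system:

```python
# Sketch of the mail robot described above (all names hypothetical).
class MailRobot:
    def __init__(self):
        self.bins = {}   # maps a recipient's name to the mail held for them
        self.tray = []   # the tray on top of the robot

    def load(self, name, item):
        """Put an item into the internal bin for the named recipient."""
        self.bins.setdefault(name, []).append(item)

    def deliver(self, name):
        """Arrive at a desk: produce the fixed utterance and empty the bin."""
        utterance = "I have mail for you %s." % name  # the $name substitution
        self.tray = self.bins.pop(name, [])           # eject bin into top tray
        return utterance

robot = MailRobot()
robot.load("Gary", "letter from MZ")
print(robot.deliver("Gary"))   # -> I have mail for you Gary.
print(robot.tray)              # -> ['letter from MZ']
```

Note that the utterance is produced by blind template substitution, coupled
to the bin-emptying action: the question below is whether that coupling is
enough for the utterance to mean anything.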

Is it too much to assume that "I have mail for you Gary" and the ejection of
the mail into the robot's top tray is causally associated with a type-identical
utterance by an assumed agent meaning "Here is your mail.  Take it"?  If not,
then does this mean that the robot intends for me to take my mail when it
utters "I have mail for you Gary"?

GF:
>>While there is an infinite number of possible utterances in any language
>>there are a finite number of utterances by any particular philosopher.  Within
>>these constraints there are an infinite number of possible languages for
>>which the propositional contents P can be uniquely associated with the
>>equivalence class of sentence-tokens synonymous with the sentences S
>>uttered by A.  (well it seems formal enough, I hope it has some content.)
>>I think one can refer to Goedel's incompleteness theorem for a sketch of
>>such a proof.

MZ:
>I fail to understand your point.  My claim that the propositional content p
>can be uniquely associated with the equivalence class of sentence-tokens
>synonymous with s in the contexts of their utterance was only meant to
>illustrate the relationship between the concept of synonymy and the notion
>of a sentence expressing its propositional content.

I'm thinking along the line that to the extent there are but a finite number
of utterances by any human, the set of propositions associated with these
sentences is also finite.  A FSA could contain this set of propositions,
could it not?
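The finiteness point can be made concrete: any finite set of sentence-tokens
is a regular language, so a finite-state automaton can accept exactly that
set.  A minimal sketch, building the FSA as a prefix tree of the sentences
(the example corpus and all names are hypothetical):

```python
# Sketch: a finite set of sentences is acceptable by a finite-state
# automaton whose states are the prefixes of those sentences.

def make_fsa(sentences):
    """Build a prefix-tree acceptor for a finite set of strings."""
    start = {}
    for s in sentences:
        state = start
        for ch in s:
            state = state.setdefault(ch, {})  # one state per prefix
        state["ACCEPT"] = True                # mark end-of-sentence states
    return start

def accepts(fsa, s):
    """Follow transitions character by character; accept at a final state."""
    state = fsa
    for ch in s:
        if ch not in state:
            return False
        state = state[ch]
    return state.get("ACCEPT", False)

corpus = {"snow is white", "grass is green"}  # a finite set of utterances
fsa = make_fsa(corpus)
assert accepts(fsa, "snow is white")
assert not accepts(fsa, "snow is green")
```

Whether "containing" the sentences in this sense amounts to containing the
associated propositions is, of course, exactly the point at issue.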

GF:
>>This being the case, I don't know how an agent can know what she means by
>>any particular utterance in that even to oneself one has a finite set of
>>conscious thoughts (or may I be so presumptuous?)

MZ:
>An agent may very well ignore the meaning of his words, particularly if he
>fails to use them responsibly.  We are not masters of the social convention
>which determines this meaning.  However, this doesn't relieve us from the
>responsibility to be committed to the meaning of our words in virtue of
>having uttered them.

I am trying to choose the words which convey my intended meanings.  I don't
know why my commitment should go beyond this.  If I choose the incorrect
words, as determined by the response I get, am I not allowed to choose new
words rather than commit myself to the meaning of the incorrect words?  Given
that there is only a certain extent to which I, by virtue of my limited set
of experiences with the words, can know their meanings, it seems a bit much
to assume such a responsibility for their correct use.  Isn't it enough that
one assumes responsibility, to the extent possible, for using the correct
words for the intended meanings, and adjusts the words used as necessary?

--gary forbis@u.washington.edu
