From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Thu Dec 26 23:58:37 EST 1991
Article 2409 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2409 sci.philosophy.tech:1629
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rutgers!uwm.edu!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Keywords: intensionality, agency, causation, syntax, semantics, pragmatics
Message-ID: <1991Dec25.193244.6921@husc3.harvard.edu>
Date: 26 Dec 91 00:32:42 GMT
References: <1991Dec19.133719.22212@oracorp.com> <1991Dec23.041134.6879@husc3.harvard.edu> <1991Dec23.173605.15690@milton.u.washington.edu>
Organization: Dept. of Math, Harvard Univ.
Lines: 286
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec23.173605.15690@milton.u.washington.edu> 
forbis@milton.u.washington.edu (Gary Forbis) writes:

>When I focus on specific parts of posts it is not to trip anyone up but rather
>to understand specific points.

Nice to know this.

>In article <1991Dec23.041134.6879@husc3.harvard.edu>
>zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>>In article <1991Dec19.133719.22212@oracorp.com> 
>>daryl@oracorp.com writes:

>DMC = Daryl McCullough
>MZ = Mikhail Zeleny

MZ:
>>a computer program, or a Turing machine, is possessed
>>only of formal syntactical structure, which neither determines its
>>interpretation (semantics), nor the causal effects thereof (pragmatics).
>>The former is determined by the compiler, the latter -- by the machine
>>architecture and operation.

GF:
>If I understand this I mostly agree with it.  This is quite an accomplishment
>for me; Mikhail is very hard to understand.  Because I find this so hard when
>the author can be questioned it is no wonder I fail to understand works of
>long dead philosophers.  The problem I have is that in order to understand this
>I think I have to apply the same standards to humans.

I have long maintained that interpretation is an essentially creative act.
If you wish to start a separate thread on hermeneutics, I'll follow up.
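The point above, that a program's formal syntax determines neither its interpretation nor its causal effects, can be illustrated with a small sketch of my own (not anything from this thread): one and the same string of four bytes, a single syntactic object, yields different values under different interpretive conventions, and nothing in the bytes themselves selects between the readings.

```python
import struct

# One fixed syntactic object: a string of four bytes.
raw = b'\x00\x00\x80\x3f'

# Two interpretations of the very same bytes; the choice is made by
# whoever supplies the unpacking convention, not by the bytes.
as_int = struct.unpack('<i', raw)[0]    # read as little-endian signed integer
as_float = struct.unpack('<f', raw)[0]  # read as IEEE-754 single-precision float

print(as_int)    # 1065353216
print(as_float)  # 1.0
```

The selection of a reading is exactly the job assigned above to the compiler (semantics) and the machine (pragmatics); the syntax alone is silent.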

DMC:
>>>the first
>>>compilers were implemented by humans, but since then most compilers
>>>have been bootstrapped; one uses earlier versions of a compiler to
>>>compile later versions of the very same compiler.

MZ:
>>As I write these words on the screen of my terminal, I expect them to
>>be reproduced on thousands, perhaps millions of other screens around the
>>world by scores of newsreader programs.  Yet I should hope that whoever
>>reads them would know enough to ascribe their authorship to me, rather than
>>to the software involved in their transmission.  On the other hand, the
>>felicity of the latter is undoubtedly due to the authors of the software.
>>The moral: pay attention to the division of labor, and make sure to give
>>credit where credit is due.

GF:
>Why do we not consider our being as the results of those who teach us?

Because we are too conceited?  Our teachers certainly influence us
causally; however, I don't think that our causal powers are exhausted by
the causal influences exerted upon us.

GF:
>My use of language did not occur full-blown but has developed over time.
>Do the "semantics" you use exist outside the language or within it?  Sometimes
>I see myself as the pattern that exists due to the forces acting upon me.
>My complexity may be due to the complexity of the environment in which I
>exist.  I may be brainwashed but my intuition does not lead me to believe
>I have some powers other entities do not.

I think that my causal powers are fundamentally different from those of a
rock.  Your opinion may differ; that's your prerogative.  As far as I know,
no one has ever managed to refute the free will thesis.

DMC:
>>>Certainly writing a compiler program is a creative act, as is the
>>>writing of any program. However, interpreting programs (in the sense
>>>of going from syntax to action, not in the sense of going from syntax
>>>to meaning; I have been focusing on "causal powers") is pretty
>>>uncreative, and pretty dull (which is why we get machines to do it).

MZ:
>>Somebody has to design the machines to do it.

GF:
>Here is one of the main complaints voiced by strong AI proponents.  While
>it is easier (right now) to design the machines, it is not clear that anyone
>has to design them.  It is sufficient that they exist.  The experiments with
>genetic algorithms show that it is possible to have the environment select for
>greater complexity.  It seems to me that many programs that exist today do
>not have an author but exist as a matter of their prior existence and the
>environment in which they exist (the programmers who modify them and the tasks
>they are assigned to do).

Evolutionary theory is just a way of legitimizing the talk of final causes
in a physicalist setting.  Since I am not in any way beholden to
physicalism, I am free to develop my teleology on the basis of individual
volition.  The present state of physical theory being what it is, the
clockwork model of the universe is no longer viable, and hence needn't
concern me.  Thus I have no problem seeing the machines as the result of a
conscious, intentional design.  As for your remark that it is sufficient
that they exist, I don't find it terribly illuminating in terms of
explanatory power.

MZ:
>>If semantic interpretation
>>is not determined by the syntax, nor are the pragmatic consequences.  This
>>is intensionality in action: if I say to you, "Go catch a falling star",
>>it's up to you whether to interpret my request literally or metaphorically.
>>And so it is in all other, much simpler cases.

GF:
>Where intensionality is limited to noetic agency?  This is why I asked you
>if the words uttered by a machine could be interpreted as having meaning 
>whether the machine could be said to have intensionality and therefore
>noetic agency.

Attention: unlike intentionality, intensionality has nothing to do with
volition, but is simply a matter of stronger identity conditions.  Sorry
about neglecting to answer your earlier message, but I think that
ascribing agency to a robot is as silly as ascribing it to a TV set.

GF:
>If a program produces "edit, abort, send?" on the screen am I not to interpret
>this as a request for an answer?  If I enter "send" this article will be
>transmitted to thousands of machines costing the network hundreds if not
>thousands of dollars.  I type "send" but I do not send the article.  The
>programmer did not "intend" for me to send this article when he or she created
>this program.  Neither the programmer nor I intend for this article to reach
>any particular machine.  By whose intention does this article reach your
>machine?

The programmer intended to offer a certain choice; the system operator
intends to supply the necessary conditions for realizing this choice.
Either way, the machine is just a channel of communication; the program --
its medium.

DMC:
>>>>>Anyway, a program is a mathematical description of a class of
>>>>>machines. When someone says that the program has this or that
>>>>>property, they are only talking about the correct implementations: to
>>>>>say that I is an incorrect implementation of program P is to say that
>>>>>I is *not* an implementation of P.

MZ:
>>>> Call it what you will, but correctness of an interpretation is a
>>>> non-recursive notion.

GF:
>Ah, I understand this!  There are an infinite number of interpretations.
>The machine implements one by virtue of the pragmatics embodied by it.  (I want
>to say semantics but I may draw some flak.)  I think the "correct" 
>interpretation of a program is one in which the premises not specifically
>mentioned are undefined; that is, there is an infinite number of correct
>interpretations.  If the agent describing the program means a subset of
>these interpretations (still and always infinite) then the agent is being
>sloppy about describing the program.

On the contrary, I believe that there is but one correct interpretation of
any given text, determined by the intention of its sender, and the semantic
and pragmatic conventions of his medium.
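The earlier remark in this exchange that correctness of an interpretation is a non-recursive notion can be made concrete (a sketch of my own, with invented names): agreement between a specification and a candidate implementation is checkable on any finite sample, but no algorithm decides agreement on all inputs for arbitrary programs; that is Rice's theorem.

```python
# Invented names throughout; a finite check, not a decision procedure.
def agrees_on(spec, candidate, sample):
    """Test extensional agreement on a *finite* sample only."""
    return all(spec(x) == candidate(x) for x in sample)

is_even_spec = lambda n: n % 2 == 0     # the intended interpretation
is_even_impl = lambda n: (n & 1) == 0   # a candidate implementation

# Agreement on a sample is mechanically checkable...
print(agrees_on(is_even_spec, is_even_impl, range(1000)))  # True
# ...but deciding agreement on *all* inputs for arbitrary programs is
# impossible in general (Rice's theorem): correctness is non-recursive.
```

The finite check can certify failure (one counterexample suffices) but never, in the general case, certify correctness; hence recognizing all correct implementations is beyond any machine.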

DMC:
>>>Why is that relevant? The claim we are discussing is whether every
>>>correct implementation of a program will have certain causal powers,
>>>not whether you or I or a computer can recognize all correct
>>>implementations.

GF:
>And indeed, this is the same question.  I see that others also believe that
>any correct implementation will have certain causal powers.  This does not
>mean that all causal powers of any correct implementation will be fully
>defined by the program.

That's "*some* causal powers"...

MZ:
>>Consider that the intensionality order is: syntax < semantics < pragmatics. 
>>
>>In other words, producing the correct consequences takes even more
>>creativity than figuring out the correct interpretation.

GF:
>It is not clear that there is an entity defined by "the correct
>interpretation."  Many feel that the agent defining the program is wrong
>if it assumes there is (unless all undefined premises are assigned
>the truth value "true" or "false".)

DMC:
>>>>>You are drifting away from Chalmers's original point: the meaning of a
>>>>>program is a machine with certain causal properties; properties of the
>>>>>form "inputting a 5 will cause the output of 25", or whatever. An
>>>>>implementation of this program will have this causal property by
>>>>>virtue of what it *means* to be an implementation.

MZ:
>>>>Quite so.  However note that, if your process of "inputting a 5 will cause
>>>>the output of 25" is construed as a physical activity, then I have argued
>>>>that the physical causal powers of a program's implementation are
>>>>irreducibly intensional with respect to, and non-emergent from its logical
>>>>structure, even when the latter is construed semantically, as interpreted
>>>>by a conscious agent.

GF:
>Well, since I can't fully understand this I will take it to mean that 
>there is some question as to whether or not numbers are physical though it
>is accepted that they have existence.  I understand that numbers and programs
>have the same physical or non-physical existence.  Can I take it that "5"
>has the same relationship to the number 5 as "Print 5" has to the BASIC program
>Print 5?

I don't care about the way the numeral-tokens (not the numbers) are
implemented; my point has to do with the fact that the numeral-manipulation
is not fully determined by the program syntax, nor even by its semantics.
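One way to picture the claim that the numeral-manipulation outruns both syntax and semantics (again my own sketch, with invented names): two procedures with identical input-output behavior, "5 in, 25 out", whose causal profiles, the work physically performed, are quite different.

```python
def square_direct(n):
    # One multiplication: one causal profile.
    return n * n

def square_by_addition(n):
    # |n| repeated additions: same extension, different physical process.
    total = 0
    for _ in range(abs(n)):
        total += abs(n)
    return total

print(square_direct(5), square_by_addition(5))  # 25 25
```

The shared semantics fixes only the input-output relation; which physical events realize it is settled elsewhere, by the machine and its operation.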

DMC:
>>>I don't know what that paragraph means. Let me just reiterate my
>>>claim: the logical structure of a program causes certain behavior in a
>>>physical computer running the program. The behavior produced is itself
>>>causal; it can cause email messages to be sent, it can set off a burglar
>>>alarm, it can multiply numbers together.

MZ:
>>Look Daryl, I don't know how to explain this any clearer.  Once again, the
>>logical structure of the world is less finely differentiated than its
>>physical, causal structure, or even its mathematical structure, as
>>evidenced by the failure of logicism; which is to say that mathematics,
>>and, a fortiori, physics, introduce more assumptions about the world than
>>does logic alone.  So the logical structure of a program cannot, in and of
>>itself, induce a physical, causal structure of its execution by a computer;
>>it takes extra constraining to achieve this effect, and insofar as it
>>involves interpretation, the job of furnishing the extra constraints is
>>essentially creative.

GF:
>In the same way our thoughts cannot, in and of themselves, induce a physical,
>causal structure in our body?

Perhaps.  However, despite having allowed David Chalmers the possibility of
the above, I would guess it's the other way around: the psychological
structure of our thoughts is, if anything, intensional with respect to the
physical structure of our bodies.

MZ:
>>>>Which is to say that meaning is a burden that has to be borne by
>>>>consciousness.

GF:
>Which has no causal significance?

That's epiphenomenalism.  A ridiculous thesis, if you ask me.

DMC:
>>>Sure. What this thread is ultimately about is whether a computer can
>>>have consciousness. Searle said no, because it doesn't have the right
>>>causal properties. Now, are you saying that it can't have the right
>>>causal properties because it doesn't have consciousness?

MZ:
>>No.  I am saying that the computer is not an agent, but a mere device that
>>extends the active powers of those who build and program it; in other
>>words, it can only "act" metaphorically, on behalf of its creators.

GF:
>Now wait a sec.  Doesn't a computer have somatic agency?  I get so confused.
>Can't machines act independently from their creators and have these
>acts interpreted metaphorically by any noetic agent?  Isn't the creative
>act of the creator completed when the machine is created and all further
>creative acts those of the interpreter(s)?

No.  Somatic agency = volition; noetic agency = conscious, intentional
volition.  I don't place much stock in metaphorical ascriptions of
intentionality.  A machine doesn't act, but merely transmits the agency of
its creator and its operator.  At this point, some scholastic theology
would be of help in sorting out this issue; alas, I'm no expert.

>>>Daryl McCullough
>>: Mikhail Zeleny                                                     :
>--gary forbis@u.washington.edu


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


