From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uwm.edu!ogicse!milton!forbis Thu Dec 26 23:58:20 EST 1991
Article 2383 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2383 sci.philosophy.tech:1594
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uwm.edu!ogicse!milton!forbis
From: forbis@milton.u.washington.edu (Gary Forbis)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Keywords: intensionality, agency, causation, syntax, semantics, pragmatics
Message-ID: <1991Dec23.173605.15690@milton.u.washington.edu>
Date: 23 Dec 91 17:36:05 GMT
Article-I.D.: milton.1991Dec23.173605.15690
References: <1991Dec19.133719.22212@oracorp.com> <1991Dec23.041134.6879@husc3.harvard.edu>
Organization: University of Washington, Seattle
Lines: 197

When I focus on specific parts of posts it is not to trip anyone up but rather
to understand specific points.

In article <1991Dec23.041134.6879@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>In article <1991Dec19.133719.22212@oracorp.com> daryl@oracorp.com writes:
DMC = Daryl McCullough
MZ = Mikhail Zeleny

MZ:
>a computer program, or a Turing machine, is possessed
>only of formal syntactical structure, which neither determines its
>interpretation (semantics), nor the causal effects thereof (pragmatics).
>The former is determined by the compiler, the latter -- by the machine
>architecture and operation.

If I understand this I mostly agree with it.  This is quite an accomplishment
for me; Mikhail is very hard to understand.  Because I find this so hard even
when the author can be questioned, it is no wonder I fail to understand the
works of long-dead philosophers.  The problem I have is that in order to
understand this I think I have to apply the same standards to humans.

DMC:
>>the first
>>compilers were implemented by humans, but since then most compilers
>>have been bootstrapped; one uses earlier versions of a compiler to
>>compile later versions of the very same compiler.

MZ:
>As I write these words on the screen of my terminal, I expect them to
>be reproduced on thousands, perhaps millions of other screens around the
>world by scores of newsreader programs.  Yet I should hope that whoever
>reads them would know enough to ascribe their authorship to me, rather than
>to the software involved in their transmission.  On the other hand, the
>felicity of the latter is undoubtedly due to the authors of the software.
>The moral: pay attention to the division of labor, and make sure to give
>credit where credit is due.

Why do we not consider our being as the result of those who teach us?
My use of language did not appear full-blown but has developed over time.
Do the "semantics" you use exist outside the language or within it?  Sometimes
I see myself as the pattern that exists due to the forces acting upon me.
My complexity may be due to the complexity of the environment in which I
exist.  I may be brainwashed, but my intuition does not lead me to believe
I have some powers other entities do not.

DMC:
>>Certainly writing a compiler program is a creative act, as is the
>>writing of any program. However, interpreting programs (in the sense
>>of going from syntax to action, not in the sense of going from syntax
>>to meaning; I have been focusing on "causal powers") is pretty
>>uncreative, and pretty dull (which is why we get machines to do it).

MZ:
>Somebody has to design the machines to do it.

Here is one of the main complaints voiced by strong AI proponents.  While
it is easier (right now) to design the machines, it is not clear that anyone
has to design them.  It is sufficient that they exist.  Experiments with
genetic algorithms show that it is possible for the environment to select for
greater complexity.  It seems to me that many programs that exist today do
not have a single author but exist by virtue of their prior existence and the
environment in which they exist (the programmers who modify them and the tasks
they are assigned to do).
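The point about selection can be put concretely.  Below is a minimal sketch of
a genetic algorithm in Python (a toy "weasel"-style example of my own, not any
particular 1991 experiment): nobody designs the surviving string; the
environment, here just a fitness function, selects it.

```python
import random

# Toy genetic algorithm: the "environment" is a fitness function;
# no one designs the surviving candidate, selection produces it.
TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # The environment "selects": more matching characters, higher fitness.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate=0.05):
    # Copy the parent, occasionally substituting a random character.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in parent)

def evolve(pop_size=100, generations=1000, seed=0):
    random.seed(seed)
    parent = "".join(random.choice(ALPHABET) for _ in TARGET)
    for _ in range(generations):
        children = [mutate(parent) for _ in range(pop_size)]
        parent = max(children, key=fitness)   # selection, not design
        if parent == TARGET:
            break
    return parent

print(evolve())
```

The author of this code writes the selection rule, but not the result; in that
limited sense the surviving string has no author.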

MZ:
>If semantic interpretation
>is not determined by the syntax, nor are the pragmatic consequences.  This
>is intensionality in action: if I say to you, "Go catch a falling star",
>it's up to you whether to interpret my request literally or metaphorically.
>And so it is in all other, much simpler cases.

Is intensionality limited to noetic agency?  This is why I asked you
whether the words uttered by a machine could be interpreted as having meaning,
and whether the machine could then be said to have intensionality and
therefore noetic agency.

If a program produces "edit, abort, send?" on the screen am I not to interpret
this as a request for an answer?  If I enter "send" this article will be
transmitted to thousands of machines costing the network hundreds if not
thousands of dollars.  I type "send" but I do not send the article.  The
programmer did not "intend" for me to send this article when he or she created
this program.  Neither the programmer nor I intend for this article to reach
any particular machine.  By whose intention does this article reach your
machine?
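The "edit, abort, send?" situation takes only a few lines to state in code.  A
minimal sketch (Python, with hypothetical handlers standing in for whatever
the actual newsreader did): the program merely branches on a string, and
calling that string a "request" and my reply an "answer" is already an act of
interpretation.

```python
def dispatch(reply, actions):
    # The program only branches on a string.  That the prompt was a
    # "request" and the reply an "answer" is the user's interpretation.
    reply = reply.strip().lower()
    if reply not in actions:
        raise ValueError("unrecognized reply: " + reply)
    return actions[reply]()

# Hypothetical handlers; the real newsreader's behavior is assumed, not known.
actions = {
    "edit":  lambda: "reopening article in editor",
    "abort": lambda: "discarding article",
    "send":  lambda: "posting article to the network",
}

print(dispatch("send", actions))
```

Nothing in `dispatch` intends anything; the intention, if there is one, is
distributed between the programmer who wrote the table and the user who types
"send".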

DMC:
>>>>Anyway, a program is a mathematical description of a class of
>>>>machines. When someone says that the program has this or that
>>>>property, they are only talking about the correct implementations: to
>>>>say that I is an incorrect implementation of program P is to say that
>>>>I is *not* an implementation of P.

>MZ:
>>> Call it what you will, but correctness of an interpretation is a
>>> non-recursive notion.

Ah, I understand this!  There are an infinite number of interpretations.
The machine implements one by virtue of the pragmatics embodied in it.  (I
want to say semantics, but I may draw some flak.)  I think a "correct"
interpretation of a program is one in which the premises not specifically
mentioned are left undefined; that is, there are an infinite number of correct
interpretations.  If the agent describing the program means a subset of
these interpretations (still and always infinite), then the agent is being
sloppy about describing the program.

DMC:
>>Why is that relevant? The claim we are discussing is whether every
>>correct implementation of a program will have certain causal powers,
>>not whether you or I or a computer can recognize all correct
>>implementations.

And indeed, this is the same question.  I see that others also believe that
any correct implementation will have certain causal powers.  This does not
mean that all causal powers of any correct implementation will be fully
defined by the program.

MZ:
>Consider that the intensionality order is: syntax < semantics < pragmatics. 
>
>In other words, producing the correct consequences takes even more
>creativity than figuring out the correct interpretation.

It is not clear that there is a unique entity picked out by "the correct
interpretation."  Many feel that the agent defining the program is wrong
if it assumes there is (unless all undefined premises are assigned
the truth value "true" or "false").

DMC:
>>>>You are drifting away from Chalmer's original point: the meaning of a
>>>>program is a machine with certain causal properties; properties of the
>>>>form "inputing a 5 will cause the output of 25", or whatever. An
>>>>implementation of this program will have this causal property by
>>>>virtue of what it *means* to be an implementation.

MZ:
>>>Quite so.  However note that, if your process of "inputing a 5 will cause
>>>the output of 25" is construed as a physical activity, then I have argued
>>>that the physical causal powers of a program's implementation are
>>>irreducibly intensional with respect to, and non-emergent from its logical
>>>structure, even when the latter is construed semantically, as interpreted
>>>by a conscious agent.

Well, since I can't fully understand this, I will take it to mean that
there is some question as to whether or not numbers are physical, though it
is accepted that they have existence.  I understand that numbers and programs
have the same physical or non-physical existence.  Can I take it that "5"
has the same relationship to the number 5 as "Print 5" has to the BASIC
program Print 5?
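The distinction can be made mechanical.  Here is a small sketch (Python
standing in for BASIC): the numeral "5" is a piece of syntax and the number 5
is what it denotes, just as a program's source text is inert syntax until an
interpreter gives it causal effect.  The example echoes Daryl's "inputing a 5
will cause the output of 25".

```python
# The numeral "5" is syntax; the number 5 is what it denotes.
numeral = "5"            # a one-character string
number = int(numeral)    # its interpretation: a value

assert numeral != 5      # the sign is not the thing signified
assert number == 5

# Likewise, source text is inert until an interpreter executes it.
program_text = "result = 5 * 5"   # mere syntax, no causal powers yet
namespace = {}
exec(program_text, namespace)     # the interpreter supplies the "causal powers"
print(namespace["result"])
```

On this picture, "Print 5" stands to the running program roughly as "5" stands
to the number: a name for something that only an interpreter can bring about.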

DMC:
>>I don't know what that paragraph means. Let me just reiterate my
>>claim: the logical structure of a program causes certain behavior in a
>>physical computer running the program. The behavior produced is itself
>>causal; it can cause email messages to be sent, it can set off a burglar
>>alarm, it can multiply numbers together.

MZ:
>Look Daryl, I don't know how to explain this any clearer.  Once again, the
>logical structure of the world is less finely differentiated than its
>physical, causal structure, or even its mathematical structure, as
>evidenced by the failure of logicism; which is to say that mathematics,
>and, a fortiori, physics, introduce more assumptions about the world than
>does logic alone.  So the logical structure of a program cannot, in and of
>itself, induce a physical, causal structure of its execution by a computer;
>it takes extra constraining to achieve this effect, and insofar as it
>involves interpretation, the job of furnishing the extra constraints is
>essentially creative.

In the same way our thoughts cannot, in and of themselves, induce a physical,
causal structure in our body?

MZ:
>>>Which is to say that meaning is a burden that has to be borne by
>>>consciousness.

Which has no causal significance?

DMC:
>>Sure. What this thread is ultimately about is whether a computer can
>>have consciousness. Searle said no, because it doesn't have the right
>>causal properties. Now, are you saying that it can't have the right
>>causal properties because it doesn't have consciousness?

MZ:
>No.  I am saying that the computer is not an agent, but a mere device that
>extends the active powers of those who build and program it; in other
>words, it can only "act" metaphorically, on behalf of its creators.

Now wait a sec.  Doesn't a computer have somatic agency?  I get so confused.
Can't machines act independently of their creators and have these
acts interpreted metaphorically by any noetic agent?  Isn't the creative
act of the creator completed when the machine is created, and all further
creative acts those of the interpreter(s)?

>>Daryl McCullough
>: Mikhail Zeleny                                                     :
--gary forbis@u.washington.edu
