Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2279 sci.philosophy.tech:1525
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!yale.edu!jvnc.net!darwin.sura.net!mojo.eng.umd.edu!mimsy!kohout
From: kohout@cs.umd.edu (Robert Kohout)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Causes and Reasons
Keywords: reasons and causes
Message-ID: <45209@mimsy.umd.edu>
Date: 19 Dec 91 16:51:42 GMT
References: <1991Dec15.120726.6592@husc3.harvard.edu>
Sender: news@mimsy.umd.edu
Followup-To: comp.ai.philosophy
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Lines: 45

In article <1991Dec16.002259.6621@husc3.harvard.edu> (Mikhail Zeleny) writes:
>
>The burden of circumscribing causal relations is borne by the semantics of
>your program specification; since every operation of a Turing machine
>reduces to a purely syntactic manipulation, the operation of the said
>Turing machine cannot determine the semantical properties of the program.

Can a machine be said to possess semantical properties independent of some
external observer? I'm not sure whether you're saying that a TM will never
know what a human thinks it is doing, or that it can never really understand
itself. In either case, I don't see why anyone should consider this a
blow to the symbolic paradigm. If, on the other hand, you mean to imply
some a priori semantic reality which exists independent of any observer,
and which cannot be captured by syntactic manipulations, don't you also
need to show that humans have somehow tapped into this? Or can we safely
follow Searle and say that humans obviously have _it_, whatever it may be,
and syntax just as obviously doesn't? It just isn't that obvious to me.
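
Just to make "purely syntactic manipulation" concrete: a Turing machine
step is nothing but a table lookup. Here is a toy sketch in Python (my
own illustration; the machine, states, and names are all invented for
the example):

    # Toy one-state machine that flips bits: pure symbol-shuffling,
    # with no meaning attached to '0' or '1' anywhere in the machinery.
    delta = {('q0', '0'): ('q0', '1', 1),   # read 0: write 1, move right
             ('q0', '1'): ('q0', '0', 1)}   # read 1: write 0, move right

    def step(state, tape, head):
        # look up (state, symbol), write the new symbol, move the head
        state, tape[head], move = delta[(state, tape[head])]
        return state, head + move

    state, tape, head = 'q0', list('0110'), 0
    while head < len(tape):
        state, head = step(state, tape, head)
    print(''.join(tape))   # prints 1001

The machine rewrites symbols and moves; whatever the bits "mean" is
supplied by whoever reads the tape.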

>Consequently, any understanding that may have produced the said program
>cannot, in principle, be captured by it, though it may very well be
>interpreted by a human agent perusing it.  This is a natural consequence of
>Searle's position, which is incontrovertible as stated.  Any failure to
>understand this point is due to the addressee.

"Any understanding that may have produced the said program..." Whose
understanding? You certainly aren't expecting a machine to read minds,
nor can you expect it to be human. What understanding are we talking about?
If I write a program that does nothing but reproduce itself, why should
I expect it to "know" that, just because I do? Isn't all the information
needed to produce the program contained in the program itself?
Or do you mean that a program can't really reproduce itself, because 
a program must be run on a hardware device? If we're just saying that
programs are meaningless independent of hardware, then once again I
fail to see how this impinges on hard AI. 
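
As a concrete aside: a program that does nothing but reproduce itself
is easy to write. Here is one in Python (my own illustration, not
anything from the original exchange):

    # A quine: its only output is its own source text.
    s = 's = %r\nprint(s %% s)'
    print(s % s)

Run it and it prints exactly those two lines. All the information
needed to produce the program really is contained in the program, yet
nothing about running it requires it to "know" that.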

It seems to me that all semantics are relative to the goals and
representations of the system in which they exist. This explains
why I have such trouble understanding much of this debate. If I
can't understand Zeleny, why should I expect a machine to? Why
hold machines to a higher standard? Why don't the internal goals
and representations of a running program qualify as machine semantics?
What more is required, besides will and representation?
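
To be concrete about what I would count as internal goals and
representations, here is one more toy sketch (Python again; every name
in it is my own invention): a control loop carrying an explicit goal
and an explicit representation of its world, acting to bring the two
into line.

    # Hypothetical toy agent: a goal, a world-representation, and an
    # action rule that compares the two.
    goal = 72.0                    # desired temperature
    world = {'temp': 65.0}         # the agent's representation of its world

    def act(world, goal):
        if world['temp'] < goal:
            return 'heat'
        if world['temp'] > goal:
            return 'cool'
        return 'idle'

    for _ in range(10):
        action = act(world, goal)
        world['temp'] += {'heat': 1.0, 'cool': -1.0, 'idle': 0.0}[action]

    print(world['temp'], act(world, goal))   # 72.0 idle

Crude, obviously, but the goal and the representation are explicit data
structures inside the running program.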

Bob Kohout
