From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny Thu Dec 26 23:58:07 EST 1991
Article 2364 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2364 sci.philosophy.tech:1578
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!wupost!psuvax1!hsdndev!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Machine Translation
Message-ID: <1991Dec22.133103.6870@husc3.harvard.edu>
Date: 22 Dec 91 18:31:01 GMT
References: <1991Dec21.111459.2302@arizona.edu> <1991Dec21.164621.6848@husc3.harvard.edu> <1991Dec21.184951.2303@arizona.edu>
Distribution: world,local
Organization: Dept. of Math, Harvard Univ.
Lines: 140
Nntp-Posting-Host: zariski.harvard.edu

In article <1991Dec21.184951.2303@arizona.edu> 
bill@NSMA.AriZonA.EdU (Bill Skaggs) writes:

Mikhail Zeleny:
>>>Simple.  Correct translation is a matter of finding an approximate synonym;
>>>synonymy is a semantic relation; if machines can't compute semantic
>>>relations, they can't translate anything.

Bill Skaggs:
>>Synonymy is a semantic relation, but it may correspond (by "coincidence")
>>to a syntactic relation, for example when translating from English
>>to Pig Latin.  So the conclusion does not follow.

MZ:
>>There are two issues here.  One is formal: for first-order languages,
>>semantic relations can't be determined by syntax, as the L\"owenheim-Skolem
>>theorem shows.  On the other hand, the structure of natural languages is
>>not likely to be first-order.  In that case, simpler considerations, such
>>as Putnam's permutation trick, come to bear on the issue; refer to a
>>parallel thread.
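
For concreteness, here is a minimal sketch of the permutation argument, in
standard model-theoretic notation (the symbols ${\cal M}$, $D$, $I$, and
$\pi$ are mine, not Putnam's).  Take a model ${\cal M} = (D, I)$ and any
nontrivial permutation $\pi: D \to D$, and let ${\cal M}^\pi = (D, I^\pi)$,
where

    $I^\pi(c) = \pi(I(c))$ for each individual constant $c$;
    $I^\pi(P) = \{(\pi(a_1), \ldots, \pi(a_n)) :
                  (a_1, \ldots, a_n) \in I(P)\}$ for each $n$-ary $P$.

Then $\pi$ is by construction an isomorphism of ${\cal M}$ onto
${\cal M}^\pi$, so an easy induction on formulas gives, for every
sentence $\phi$,

    ${\cal M} \models \phi$ iff ${\cal M}^\pi \models \phi$.

The two models verify exactly the same sentences while systematically
permuting reference; truth conditions alone cannot fix what our terms
denote.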

BS:
>I agree; that's why I used the word "coincidence".  I am quite happy
>to accept that syntax does not determine semantics; I merely deny that
>this poses a difficulty for AI.  All it shows is that the relationship
>between syntax and semantics is something like a "synthetic *a priori*".

I'm glad you can see the light.  Note, however, that in denying the
difficulty of the problem, you are in effect postulating a miracle, an
expectation as unwarranted as it is consonant with the eschatological,
hence antiscientific, self-definition of the AI field.  An honest scientist
would set as his goal the study of human intelligence, rather than the
dubious task of replicating it by artificial means; it is as if a physician
announced that the goal of his art was to procure immortality, rather than
to cure disease.  Likewise, you are wholly unwarranted in expecting your
"coincidence" to occur by fiat.

Incidentally, I fail to appreciate the relevance of your Kantian allusion.

MZ:
>>On the other hand, you might wish to maintain that the semantic properties
>>of two natural languages may be isomorphic, inducing a syntactical
>>isomorphism; or perhaps that in the absence of such isomorphism, the
>>synonymy transformation might be coextensive with some syntactical
>>manipulation.  The fallaciousness of this claim can be seen even in cases
>>of closely related languages like English and French, by considering, e.g.
>>the French word `conscience', ambiguously translatable as `conscience' or
>>`consciousness'.  Furthermore, the question of figurative meaning transfer
>>(e.g. as evidenced in the use of metaphor, irony, etc.)  is rightly
>>considered to be intractable not only by purely syntactic, but even by
>>semantic means.

BS:
>I think you are right, but a number of clever philosophers (e.g. Jerry
>Fodor) would disagree.  Fodor believes that there is a "language of
>thought" common to all human beings, into which all utterances are
>more or less mechanically translated.  This translation process would
>induce an isomorphism between different languages.

Fodor is welcome to his "Mentalese"; I think that his hypothesis, however
relevant it might be to establishing your claim, is orthogonal to the issue
of a functionalist reduction of semantics, which he in any case rejects in
"A Theory of Content".  My rhetoric-based objection would stand regardless.

MZ:
>>>The fact that semantic relations are non-recursive is a direct consequence
>>>of G\"odel's Second Incompleteness theorem.  In any language containing
>>>elementary arithmetic, as well as a recursive semantic relation "...
>>>expresses ...", we may apply the arithmetization trick to the said relation
>>>with predictable results.
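
For the record, the trick runs roughly as follows (a standard Tarski-style
reconstruction via the diagonal lemma; the notation is mine, and I collapse
the binary "... expresses ..." into a one-place truth predicate for
simplicity).  Suppose a theory $T$ extending elementary arithmetic has a
formula $E(x)$ that recursively (indeed, merely arithmetically) defines
"$x$ is the G\"odel number of a true sentence".  The diagonal lemma yields
a sentence $\lambda$ with

    $T \vdash \lambda \leftrightarrow \neg E(\overline{\lambda})$,

where $\overline{\lambda}$ abbreviates the numeral of $\lambda$'s G\"odel
number.  Material adequacy of $E$ demands $E(\overline{\lambda})
\leftrightarrow \lambda$, whence $\lambda \leftrightarrow \neg\lambda$, a
contradiction; so no such recursive semantic relation exists.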

BS:
>>This argument has been conclusively refuted many times.  I don't
>>feel like writing it out all over again.  

MZ:
>>Not only is this spectacularly arrogant claim made in error; it is also
>>remarkably hypocritical, given the vehement demands for self-contained
>>elementary explanations you made at the time of our initial exchange
>>several weeks ago. 

BS:
>Sorry, I just couldn't resist the urge to copy your own style of
>argument for once; it's so much easier.

...the difference being, I only do this sort of thing when I am right.

MZ:
>>Incidentally, if you really believe yourself to be
>>capable of refuting this argument, I urge you to publish: Thomason and
>>Putnam, the authors of different versions thereof, would be happy to
>>consider your rebuttal of their views.  This is a genuine conundrum, so,
>>to appropriate the words of John McCarthy, if you can solve it, you'll become
>>well known for more than invective.

BS:
>Well, I'll sketch the refutation, but you won't like it.  The essential
>claim is that human minds are effectively no more powerful than finite
>state automata.  Therefore semantics, in the strong sense in which
>you use the word (and the sense it must have for the theorem to apply),
>is something humans are not capable of possessing.  Therefore human
>language cannot have semantics in this sense; therefore G\"odel's
>result need not apply to human language.

In my circles, this sort of argument is called petitio principii: whether
human semantic competence outruns the resources of a finite state automaton
is precisely the point at issue, so your premise assumes your conclusion.
I see absolutely no reason to assume that human minds are effectively no
more powerful than finite state automata.

BS:
>(To anticipate at least one objection:  All of the empirical evidence
>for humans' "referring" to things can potentially be duplicated by
>finite state automata, so empirical data cannot possibly prove the
>existence of reference (in the strong sense).)

Is this sort of potentiality any different from the one Penrose would
appeal to in claiming that he can potentially apply the reflection
principle to any formal system he judges to be consistent?  Yes, it is:
unlike Penrose, you have no clear idea of how to go about actualizing your
claim.  Moreover, even should I grant you observation, you still won't have
introspection.  So there.
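
The reflection principle in question, for definiteness: from a system $S$
one accepts as sound, pass to $S + {\rm Con}(S)$, and iterate through a
transfinite progression in the manner of Turing and Feferman,

    $S_0 = S$;  $S_{\alpha+1} = S_\alpha + {\rm Con}(S_\alpha)$;
    $S_\lambda = \bigcup_{\alpha < \lambda} S_\alpha$ for limit $\lambda$

(ordinal notations and their attendant difficulties suppressed; the
formulation is mine, not Penrose's).  Penrose's claim is that a
mathematician can, in principle, keep taking this step for any formal
system he judges to be consistent.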

>	-- Bill
>
>I will be away for the next week, so I will not be able to follow
>up on responses.  Merry Christmas.

That's why I shall repost my reply.  Happy New Year.


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: What is good?  What is ugly?                             Harvard   :
: What is great, strong, weak...                           doesn't   :
: Don't know! Don't know!                                   think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


