From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!europa.asd.contel.com!darwin.sura.net!haven.umd.edu!mimsy!kohout Thu Dec 26 23:57:59 EST 1991
Article 2353 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2353 sci.philosophy.tech:1566
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!europa.asd.contel.com!darwin.sura.net!haven.umd.edu!mimsy!kohout
From: kohout@cs.umd.edu (Robert Kohout)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Machine Translation (was re: Searle's response to silicon brain?)
Message-ID: <45330@mimsy.umd.edu>
Date: 21 Dec 91 18:46:01 GMT
References: <1991Dec21.000014.6836@husc3.harvard.edu>
Sender: news@mimsy.umd.edu
Followup-To: comp.ai.philosophy
Organization: U of Maryland, Dept. of Computer Science, Coll. Pk., MD 20742
Lines: 59

In article <1991Dec21.000014.6836@husc3.harvard.edu> zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:
>In article <45303@mimsy.umd.edu> kohout@cs.umd.edu (Robert Kohout) writes:
>
>>       [...]              if someone in this debate sees how
>>the correctness of Searle's position in any way implies that,
>>for example, we will never be able to engineer a fully automatic,
>>high quality machine translator I wish they'd explain it
>>in a way that I could understand.
>
>Simple.  Correct translation is a matter of finding an approximate synonym;
>synonymy is a semantic relation; if machines can't compute semantic
>relations, they can't translate anything.
>
>The fact that semantic relations are non-recursive is a direct consequence
>of G\"odel's Second Incompleteness theorem.  In any language containing
>elementary arithmetic, as well as a recursive semantic relation "...
>expresses ...", we may apply the arithmetization trick to the said relation
>with predictable results.
>
Is it really this simple? My interpretation of G\"odel's Theorem is
somewhat different from yours, but it still seems isomorphic. All
we get from G\"odel is a proof that no formal system can prove *all*
the true theorems it can express. Correct me if I'm wrong, but
all this implies for language is that no digital machine will be
semantically complete. Is that necessary for machine translation?
Do we have any reason to believe that humans are any "better" at this?
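For the record, my reading of the "arithmetization trick" you mention is
essentially Tarski's undefinability argument rather than the Second
Incompleteness theorem proper. A sketch, in case I've misread you:

```latex
% Suppose the semantic relation "... expresses ..." were recursive and
% definable in a language L containing elementary arithmetic.  Then a
% truth predicate True(x) for L would be definable from it.  But the
% diagonal lemma yields a sentence G such that
\vdash \; G \leftrightarrow \neg\,\mathrm{True}(\ulcorner G \urcorner)
% so G is true iff it is not true --- a contradiction.  Hence no formula
% of L defines truth for L, and so no recursive "expresses" relation can
% live inside L either.
```

Note that even granting all this, the conclusion is only that the machine's
semantic relation cannot be complete, which is the point I question below.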

To put it another way, it seems to me that all we get from G\"odel's
Theorem here is a statement that our machine might not always be right
(and even the word "might" is important, since it may be that the machine
will never see the sentences it cannot translate), just as I may have
easily misinterpreted your response.

Mind you, I'm not saying you're wrong; the lightbulb just hasn't gone
off for me yet. For that matter, your response has further elucidated
the problem I have with Searle. In defending Searle's position, you
have chosen to defend his belief that syntax is neither sufficient for
nor constitutive of semantics. Assuming that is correct, I don't see
how that can be reconciled with his position that we can model a brain
down to the last synapse without reproducing semantics. If I can model
a brain down to the last synapse, that implies that I can pretty much
reproduce its behavior, does it not? And if I have the brain of
a well trained Russian-English translator, and I want to translate an
English sentence into Russian, I can just input the English and wait
for the output.  It wouldn't be much of a model if it didn't
behave the same as the brain we're modeling, so we can pretty much 
expect a decent Russian translation to be produced. How can this be
reconciled with Searle's position that a) we need semantics to translate
and b) the simulated brain doesn't have 'em?

If you believe that the simulated brain really won't behave the same
as the real brain, why? We've modeled the damn thing right down to the
last synapse. Every time a real brain neuron fires, a corresponding
neuron in the simulated brain fires. Where and why would their behavior 
diverge?

I hope you see my predicament. The two statements seem incompatible to me.

Bob Kohout


