Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!news.mathworks.com!gatech!newsfeed.pitt.edu!dbisna.com!psinntp!psinntp!psinntp!psinntp!scylla!daryl
From: daryl@oracorp.com (Daryl McCullough)
Subject: Re: Mathematical Truth
Message-ID: <1995May4.043059.9819@oracorp.com>
Organization: Odyssey Research Associates, Inc.
Date: Thu, 4 May 1995 04:30:59 GMT
Lines: 62

rickert@cs.niu.edu (Neil Rickert) writes:

>Putnam discusses the many problems with Tarski's theory of truth, and
>with disquotational theories of truth, in his book "Representation
>and Reality".  It is worth reading.

Could you summarize some of the points here?

>     "Snow is white" is true if and only if snow is white
>is surely wrong.  A string, such as "Snow is white", does not have a
>truth value.  Strings don't have truth values.  The truth values
>apply to the semantic interpretation of the string, not to the string
>itself.

In the metatheory, I think it is easier simply to think of statements
as strings. A semantic interpretation is then a way of assigning
truth values to strings. Thus the statement

     "Snow is white" is true if and only if snow is white.

describes under what circumstances the string "Snow is white" is
to be given the value "true".
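To make that picture concrete, here is a toy sketch (my own, not
anything from Putnam or Tarski): statements are modeled as plain
strings, a semantic interpretation is just a mapping from strings to
truth values, and the T-schema becomes the condition that the
interpretation assigns "Snow is white" the value True exactly when
snow is, in fact, white. The "state of the world" function is of
course a stand-in.

```python
# Toy model: statements are just strings; a semantic interpretation
# is a mapping from strings to truth values.

def snow_is_white():
    # Stand-in for the actual state of the world (an assumption here).
    return True

# A semantic interpretation: assigns a truth value to each string.
interpretation = {
    "Snow is white": snow_is_white(),
    "Grass is red": False,
}

def satisfies_t_schema(interp, sentence, fact):
    # Tarski's condition: '"S" is true if and only if S'.
    # The interpretation is correct for this sentence exactly when
    # the value it assigns matches the way the world is.
    return interp[sentence] == fact

print(satisfies_t_schema(interpretation, "Snow is white", snow_is_white()))
```

Nothing here settles the philosophical question, of course; it only
shows that "truth values attach to interpretations of strings" and
"truth values attach to strings under an interpretation" come to the
same bookkeeping.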

>>I think it is clear that the truth of a sentence in natural
>>language does *not* depend on the derivability of the sentence from
>>some set of axioms. If I make the claim "the cat is in the basement",
>>then the way to evaluate the truth of that claim is by checking the
>>basement for cats. It is *not* by trying to derive it. This doesn't
>>mean that axioms and definitions are irrelevant to the truth of the
>>claim---after all, we have to agree on what a "cat" is and what a
>>"basement" is. However, the axioms and definitions necessary to
>>understand the meaning of the sentence "the cat is in the basement"
>>are not sufficient to evaluate its truth.
>
>Daryl, I am rather surprised to hear you argue this.  You have been
>one of the strongest defenders of AI on comp.ai.philosophy.  Now you
>are making a claim which implies that AI is bunk.  For if AI is
>possible, then an AI system will determine truth using computational
>procedures.

Or not at all. Some statements we don't know the truth of, and neither
would an AI. I don't know whether my cat is currently in the basement,
and no amount of computation can answer the question. I have to go in
the basement and look.

>In essence, this implies that it will determine truth
>based on axioms and raw data.

Or not at all.

>By saying that truth is irreducible,
>you are arguing that human intelligence is irreducible -- or at least
>you are arguing that the part of human intelligence concerned with
>determining truth is not reducible to computation.

No, I don't make any claims for humans' ability to determine truth,
either. We don't know the answers to all questions and we don't know
the truth of all statements.

Daryl McCullough
ORA Corp.
Ithaca, NY
