Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!nntp.club.cc.cmu.edu!miner.usbm.gov!rsg1.er.usgs.gov!stc06.ctd.ornl.gov!fnnews.fnal.gov!usenet.eel.ufl.edu!news.mathworks.com!gatech!swrinde!tank.news.pipex.net!pipex!uknet!newsfeed.ed.ac.uk!edcogsci!usenet
From: jaspert@cogsci.ed.ac.uk (Jasper Taylor)
Subject: Re: Zeleny on predictability
In-Reply-To: zeleny@oak.math.ucla.edu's message of 31 Jul 1995 06:21:10 GMT
Message-ID: <JASPERT.95Jul31195234@scott.cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: scott
Organization: Centre for Cognitive Science, University of Edinburgh
References: <3ul3uc$u2t@saba.info.ucla.edu> <JASPERT.95Jul27130345@scott.cogsci.ed.ac.uk>
	<3vdskn$9m1@percy.cs.bham.ac.uk> <3vhsom$n2c@saba.info.ucla.edu>
Date: Mon, 31 Jul 1995 18:52:34 GMT
Lines: 131
Xref: glinda.oz.cs.cmu.edu comp.ai:32043 comp.ai.philosophy:31136 sci.logic:13366 sci.cognitive:8740


In article <3vhsom$n2c@saba.info.ucla.edu> zeleny@oak.math.ucla.edu (Michael Zeleny) writes:

>>>> I like to think so.  Diagonalization does not buy you anything,
>>>> since semantics, in contradistinction from syntax, cannot be
>>>> diagonalized over.

> (AS)
>> I'd like to see a proof of this. There are those who would dispute
>> the possibility of semantics that cannot be expressed
>> syntactically.  But I guess it depends what you mean by
>> "semantics". Are you talking about non-denumerable sets?

> Non-denumerability enters into it by means of the same diagonal
> argument that establishes the non-recursiveness of the relation of
> semantic interpretation via its arithmetization `a la Goedel.  The
> most apposite argument is due to Rich Thomason (in J.Halpern, ed.,
> _Reasoning about Knowledge_, Morgan Kaufmann, 1986), but the basic
> idea is familiar from the work of Tarski.  To make it more explicit,
> consider a recursive function F : P --> B, which maps predictions to
> behaviors.  As before, F is effectively decidable and representable
> in a system with enough math, leading to unpleasant consequences for
> your side.  As regards those who would dispute the obvious, I shall
> limit myself once again to citing Church: there is no known upper
> bound on human stupidity.

I think Calvin puts it better: "...it's a fallacy that taste bottoms
out somewhere. If the broadcasters could find a way to aim even lower,
they'd make some _real_ money!" (Bill Watterson, "Calvin and Hobbes")

You appear to be saying that if you know the syntactic manipulations
of a prediction-defying machine, you can feed it a prediction that
requires Goedel's proof to comprehend; the machine will fail to derive
the proof, and hence have no way of avoiding producing the predicted
output.
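
The trick being restated here can be caricatured in a few lines (a toy
sketch, with behaviours reduced to two invented labels, "heads" and
"tails"): the machine consults the prediction and does the other thing,
so any predictor must model this very function -- which is where the
diagonal argument bites.

```python
# Caricature of a prediction-defying machine: whatever behaviour is
# predicted, it produces the other one.  "heads"/"tails" are invented
# stand-ins for the machine's two possible behaviours.

def defy(prediction):
    # Pick the behaviour NOT named in the prediction.
    return "tails" if prediction == "heads" else "heads"

print(defy("heads"))   # machine does "tails"
print(defy("tails"))   # machine does "heads"
```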

Now I haven't read the paper Aaron mentioned about this (in the AI
journal, 74(2)), but it seems to me that people invent logical systems
to formalize regularities in their intuitions (and I don't understand
why D. Longley thinks the intuitions underlying FOPC are more reliable
than those behind modal, temporal, nonmonotonic, etc. logics). So,
seeing that Goedel's proof is intuitively valid, we humans can
formalize its steps into a new calculus in which it can be derived ---
one which, sadly, has a different Goedel sentence of its own. But
since the machine has its formal system built in, it has a finite
(though arbitrarily long) Goedel sentence (to build a machine with a
GS of a certain length, keep adding the shorter ones to its axioms).
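
That "keep adding the shorter ones" step can be sketched symbolically
(a toy, of course: theories are just lists of axiom labels, and the
Goedel sentence is an opaque tag named after the theory it
diagonalizes over, not a real arithmetization):

```python
# Toy sketch of extending a formal system by adding its own Goedel
# sentence as a new axiom.  Each extension proves the old GS but
# acquires a fresh, longer one of its own.

def goedel_sentence(theory):
    # Stand-in for the real construction: the sentence is tagged with
    # the exact theory it diagonalizes over, so it grows at each step.
    return "G(" + ", ".join(theory) + ")"

def extend(theory):
    # New system = old system + old system's Goedel sentence.
    return theory + [goedel_sentence(theory)]

theory = ["PA"]            # start from, say, Peano Arithmetic
for _ in range(3):
    theory = extend(theory)

for axiom in theory:
    print(axiom)
```

The point the toy makes is that the tower never closes: every finite
stack of axioms still yields one more (finite but ever longer) Goedel
sentence.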

Now I can only hold Goedel's proof in my mind for a few minutes before
it decays and I have to learn it again. I doubt that anyone has ever
managed to understand the GS of a system which includes the standard
GS as an axiom (rather than just being aware of the notion). So I
suspect that, faced with the kind of predictive skill we are
considering using against our poor machine, the human intuition of
unpredictability is little more than the intuition that we can behave
randomly (in this case, stop coming up with new formal systems at an
unexpected point). In which case (a) we can't (try calling heads or
tails at random for a while, then do a statistical analysis of the
resulting string), and (b) even if we could, how do we hide our
randomizer from the predictor?
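
The statistical analysis suggested in (a) can be sketched with a
simple runs test (the sample call string below is invented for
illustration; people trying to "be random" typically alternate too
often, producing significantly more runs than a fair coin would):

```python
# Minimal runs test on a heads/tails string: count runs (maximal
# blocks of identical calls) and compare with the expectation for a
# random arrangement.  A z-score with |z| > 2 suggests non-randomness.

from math import sqrt

def runs_test(calls):
    n = len(calls)
    heads = calls.count("H")
    tails = n - heads
    # A run ends wherever two adjacent calls differ.
    runs = 1 + sum(1 for a, b in zip(calls, calls[1:]) if a != b)
    # Expected number of runs and its variance under randomness.
    expected = 1 + 2 * heads * tails / n
    variance = (2 * heads * tails * (2 * heads * tails - n)) / (n * n * (n - 1))
    z = (runs - expected) / sqrt(variance)
    return runs, expected, z

seq = "HTHTHHTHTTHTHTHHTHTHHTTHTHTHHTHT"   # invented "human" calls
runs, expected, z = runs_test(seq)
print(runs, round(expected, 1), round(z, 2))
```

On this made-up string the observed run count sits well above the
expectation, i.e. the would-be randomizer alternates far too eagerly.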

> (AS)
>> Why do you have this faith in introspective evidence? How exactly
>> do you do the introspection?
>> 
>> Does it require special training?
>> 
>> Do you just shut your eyes, ask yourself a question, and hear the
>> result in your mind's ear?
>> 
>> Or is it done via pictures in your mind's eye?
>> 
>> Or are you generalising from recollections of predictions you have
>> confuted in the past?
>> 
>> Or is it that you simply discover that you have the belief in
>> unpredictability whenever you think about it?
>> 
>> Or do you do experiments, like predicting what you are going to do
>> next, and then find that you are always able to confute them?
>> 
>> Or do you get other people to do the predictions and then find that
>> you are able to confute them? (What about the predictions Jasper
>> and I make?)

> Do you seriously expect me to dignify this juvenilia with a serious
> answer, or are you just trying to trivialize this discussion beyond
> all measure?

You can't blame Aaron, he is obviously just defective in his own
introspective faculty. Go on, humour him with an answer.

>> Just what sorts of things do you think can be discovered by
>> introspection?

> Any faculty relevant to cognition, perception, and volition, as
> distinguished from belief, sensation, and desire.  That includes the
> usual rationalist bag of tricks -- the cogito, the real distinction,
> conditional knowledge of the external world and other minds, the
> categorical imperative, and so on.  Personal immortality, direct
> communications from God, and the soul's capacity for disembodied
> existence are not on the agenda.  Sorry about that.

S'funny, I keep getting the last three as well...why should they be
less valid than the others? BTW, God says Hi. 

> (AS)
>> Which of one's abilities cannot?
>> 
>> (E.g. some people can't even tell by introspecting that they are
>> angry, or jealous, or infatuated, or pompous, or confused, even
>> when it is perfectly evident to others.)

> You really ought to address this one to John Searle -- he is the one
> insisting on ultimate transparency of all mental content.  I take it
> that raw emotions, phenomenal feels, reflexive twitches, and other
> such things, can well be opaque to our best introspective efforts,
> in so far as they are devoid of cognitive import.  The rule of thumb
> is: whatever I can and should be held responsible for, is eo ipso
> introspectively accessible.

Isn't this usually the other way round: that for which we should be
held responsible is defined as that which we commit through acts of
will? (And isn't all this unpredictability stuff just free will by
another name?)
--
Jasper Taylor                        | /www.cogs  /   |  A politically-correct
Human Communication Research Centre  | /       c  t   |  joke is like an
University of Edinburgh              | :pt  de.i  rep |  environment-friendly
2 Buccleuch Place, Edinburgh, UK.    |   t  .       s |  stinkbomb.
Phone (44) 31 650 4450               |   h  ac.uk/~ja |
