Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.mathworks.com!gatech!swrinde!tank.news.pipex.net!pipex!uknet!newsfeed.ed.ac.uk!edcogsci!usenet
From: jaspert@cogsci.ed.ac.uk (Jasper Taylor)
Subject: Re: Zeleny on predictability
In-Reply-To: zeleny@oak.math.ucla.edu's message of 22 Jul 1995 19:53:31 GMT
Message-ID: <JASPERT.95Jul27130345@scott.cogsci.ed.ac.uk>
Sender: usenet@cogsci.ed.ac.uk (C News Software)
Nntp-Posting-Host: scott
Organization: Centre for Cognitive Science, University of Edinburgh
References: <3ukmgh$7lr@percy.cs.bham.ac.uk> <3ul3uc$u2t@saba.info.ucla.edu>
	<3uqhae$bao@percy.cs.bham.ac.uk> <3urkvr$99v@saba.info.ucla.edu>
Date: Thu, 27 Jul 1995 12:03:45 GMT
Lines: 78
Xref: glinda.oz.cs.cmu.edu comp.ai:31909 comp.ai.philosophy:30950 sci.logic:13199 sci.cognitive:8609


In article <3urkvr$99v@saba.info.ucla.edu> zeleny@oak.math.ucla.edu (Michael Zeleny) writes:

[I'm trying to summarize the debate between Aaron (AS) and Michael about the
possibility of an unpredictable computer program here]

> Your allegation of successful refutation is based entirely in a
> logical misunderstanding of the nature of admissible predictions.

[...]

> It is not so trivial a task to understand an explicit prediction of
> oneself sufficiently well to be able to generate some behavior that
> falsifies it.  I claim that we have this ability, and argue that no
> deterministic automaton can equal us in this respect.

I don't think humans have this ability in such an undeniable, God-given
kind of way. Supposing I were to make my prediction in Gaelic, or some
language you don't understand, you couldn't be sure of confuting
it. And once we start limiting the kinds of prediction allowable, the
design of the program starts looking a lot easier.
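To make that concrete: if an admissible prediction has to be a
literal string giving the program's output, then a toy program along
the following lines (my own sketch in Python, with made-up names --
nobody's actual f) frustrates every finite literal prediction:

# A 'contrarian' program: whatever literal prediction it is
# handed, it outputs something else.
def f(prediction):
    # f(p) != p for every finite string p, so no finite string
    # is a fixpoint of input_p = f(input_p).
    return prediction + "!"

p = "the program will print this"
assert f(p) != p   # the literal prediction is always falsified

Of course this only dodges literal predictions, which I take to be
Michael's point below.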

>>> ... It is hardly relevant that you can frustrate any attempt to
>>> write down a literal string purporting to anticipate your
>>> program's output.  In other words, if output = f(input), then I am
>>> seeking a fixpoint input_p such that describes(input_p) =
>>> f(input_p) for any number of conceivable description functions,
>>> rather than the fixpoint input_p = f(input_p), easily frustrated
>>> by your above specification of the program f.

> (AS)
>> OK. Then how can you be sure that no matter what my function f is
>> there is a FINITE fixpoint? Surely you cannot?

> Nor do I have to.  If there is no finite fixpoint, a finite state
> automaton never halts.  But bear in mind that the halting problem
> for any FSA is decidable.  Not so the description problem, by a
> Tarskian diagonal argument.

Does this mean that if you're allowed an infinite string as input,
some interpretation of that string could be a description of the
program's behaviour on reading it? Since humans aren't very good with
infinite strings, it might be difficult to tell whether the prediction
was successful.
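
Incidentally, the decidability claim for finite-state machines is
just the pigeonhole principle: a deterministic machine with finitely
many states must revisit one within finitely many steps, and once it
revisits a state it is looping forever. A sketch of the core fact
(hypothetical encoding, machine given as a transition table):

# Decide halting for a deterministic finite-state machine with
# no further input: watch for a repeated state (pigeonhole).
def fsa_halts(transitions, start, halt_states):
    # transitions: total map, state -> next state
    seen = set()
    state = start
    while state not in halt_states:
        if state in seen:        # repeated state => loops forever
            return False
        seen.add(state)
        state = transitions[state]
    return True

# 0 -> 1 -> 2 -> 1 -> ... never reaches the halt state 3:
print(fsa_halts({0: 1, 1: 2, 2: 1}, 0, {3}))   # False

Nothing like this works for the description problem, which is where
the Tarskian diagonal argument comes in.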

> (AS)
>> Now if you are going to allow predictions whose specification is
>> infinitely long we are in a different ballpark. I'll have to
>> produce a function f that works like a diagonalizer. It should not
>> be too difficult. But I haven't thought it through. Have you?

> I like to think so.  Diagonalization does not buy you anything,
> since semantics, in contradistinction from syntax, cannot be
> diagonalized over.  And pure syntax is patently insufficient for
> recognizing and confuting any prediction.

And if you allow predictions of the semantic content of the
program's output, the human can lose on the foreign-language
example above. If you allow predictions of the _pragmatic_ content,
then I can beat you now: I predict that you will claim your reply
to this post confutes this prediction!
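
As for AS's diagonalizer: over _syntactic_ predictions it really
isn't too difficult. If the predictions are handed over as an
enumerated family of strings, make the output differ from the i-th
prediction at its i-th character. A sketch of the finite case (my
own setup; each prediction assumed at least as long as the list):

# Diagonalize against an enumerated family of syntactic
# predictions: the output differs from prediction i at character
# i, falsifying all of them at once.
def diagonalize(predictions):
    out = []
    for i, p in enumerate(predictions):
        out.append("a" if p[i] != "a" else "b")  # differ at i
    return "".join(out)

preds = ["aaaa", "abab", "bbba", "abba"]
out = diagonalize(preds)
assert all(out[i] != p[i] for i, p in enumerate(preds))

But as I read Michael, his point is precisely that this does nothing
against predictions individuated by their meaning rather than their
spelling.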

> I chose my words very carefully.  The claim is that introspective
> evidence [of spontaneity concomitant with rational agency] suggests
> that any prediction of human behavior can be easily confuted by its
> subject, once it is made available thereto.  In principle, no such
> ability to frustrate the best effort of surmising its future
> behavior on the basis of its design and initial conditions, can be
> imputed to any mechanism [caeteris paribus].  So if our
> introspective conclusions are valid, minds are not machines. [...]

Seems our introspective conclusions are pretty dodgy sometimes. 

--
Jasper Taylor                        |   _____       |  A politically-correct
Human Communication Research Centre  |  |_   _| |_   |  joke is like an
2 Buccleuch Place, Edinburgh, UK.    |    | |_____|  |  environment-friendly
Phone (44) 31 650 4450               |               |  stinkbomb.
