Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!swrinde!sdd.hp.com!hplabs!hplntx!curry
From: curry@hpl.hp.com (Bo Curry)
Subject: Re: Zeleny on predictability
Sender: news@hpl.hp.com (HPLabs Usenet Login)
Message-ID: <DCLEIC.E01@hpl.hp.com>
Date: Mon, 31 Jul 1995 18:11:00 GMT
References: <3ul3uc$u2t@saba.info.ucla.edu> <JASPERT.95Jul27130345@scott.cogsci.ed.ac.uk> <3vdskn$9m1@percy.cs.bham.ac.uk> <3vhsom$n2c@saba.info.ucla.edu>
Nntp-Posting-Host: saiph.hpl.hp.com
Organization: Hewlett-Packard Laboratories, Palo Alto, CA
X-Newsreader: TIN [version 1.2 PL2]
Followup-To: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Lines: 23
Xref: glinda.oz.cs.cmu.edu comp.ai:32042 comp.ai.philosophy:31135 sci.logic:13365 sci.cognitive:8739

Michael Zeleny (zeleny@oak.math.ucla.edu) wrote:
: >>>...And pure syntax is patently insufficient for
: >>>recognizing and confuting any prediction.

: (AS)
: >I think I gave perfectly good counter examples to that in the form
: >of programs that do nothing but syntactic manipulation and thereby
: >refute any explicit prediction you make about what they are next
: >going to print out. Of course, the machine will not be able to
: >confute all predictions, but neither can humans.

: Your solution fails to account for the problem of interpretation,
: which is undecidable from the foregoing considerations.

Aren't you just shifting the ground here? Aka "begging the question"?
It appears that your claim about confuting predictions
ultimately rests on the (in my view prior) claim that the
interpretation of such predictions is *impossible* through
"syntactic manipulation". If that's so, then you can't use
your purported ability to confute predictions as evidence for
that prior claim about interpretation.

Bo
