Newsgroups: comp.ai,comp.ai.philosophy,sci.logic,sci.cognitive
Path: cantaloupe.srv.cs.cmu.edu!europa.chnt.gtegsc.com!news.mathworks.com!zombie.ncsc.mil!simtel!harbinger.cc.monash.edu.au!news.uwa.edu.au!DIALix!sydney.DIALix.oz.au!quasar!telford
From: telford@threetek.dialix.oz.au (Telford Tendys)
Subject: Re: FIRST order?
In-Reply-To: zeleny@oak.math.ucla.edu's message of 18 Jul 1995 07:42:33 GMT
Message-ID: <1995Jul21.071239.23579@threetek.dialix.oz.au>
Organization: 3Tek Systems Pty Ltd., N.S.W., Australia
References: <jqbDBsunG.C6H@netcom.com> <3ual7e$b6g@saba.info.ucla.edu> <jqbDBu09v.G9H@netcom.com> <3ufol9$7sg@saba.info.ucla.edu>
Date: Fri, 21 Jul 1995 07:12:39 GMT
Lines: 56
Xref: glinda.oz.cs.cmu.edu comp.ai:31745 comp.ai.philosophy:30686 sci.logic:12874 sci.cognitive:8460

> From: zeleny@oak.math.ucla.edu (Michael Zeleny)
> 
> As I said before, the program's solutions can be effectively computed
> from its design and implementation.  The complexity of the task makes
> no difference to the matter of principle.

If the task is complex enough that it cannot be done in finite time
then this makes a difference. In that case the program may well be a
formal system, but there is no way to know (even in theory) what that
system is.

> By definition, a properly
> written deterministic (possibly including all kinds of pseudorandom
> devices) program will comprise the most efficient means of predicting
> its own behavior.

So to know what a robot does, build a robot and watch what it does.

To formalise the robot this way requires feeding it every possible input.
Obviously this direct method is not computable.
You can write a formula to ENUMERATE every possible input,
but formalisation would then require mapping this input enumeration
onto the output enumeration, which is only possible if you can calculate
robot behaviour for CLASSES of inputs (rather than one specific input
at a time). Building a robot and watching what it does is not a suitable
method for resolving CLASSES of inputs (not in the general case anyhow).
(If it were, that would be nice, because testing code for correct
behaviour would be much easier.)

This seems to imply that a general robot CANNOT (even in theory) be
formalised in a finite number of steps.
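To make the point concrete, here is a toy sketch (my own example, not
from the thread): two "robots" that agree on every input you are likely
to test, yet differ on one input you did not try. No finite amount of
watching distinguishes them, which is why observation cannot resolve a
whole CLASS of inputs.

```python
# Sketch: two black-box "robots" that agree on all tested inputs but
# differ on one huge, untested input. Any finite test suite that misses
# that input declares them identical as formal systems -- wrongly.

def robot_a(n):
    """Doubles its input, everywhere."""
    return 2 * n

def robot_b(n):
    """Doubles its input -- except on one input nobody thought to try."""
    if n == 10**100:
        return 0
    return 2 * n

# Build the robots and watch what they do on a finite sample:
for n in range(1000):
    assert robot_a(n) == robot_b(n)   # indistinguishable so far

# Yet as formal systems they are not the same:
print(robot_a(10**100) == robot_b(10**100))   # False
```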

> And as a matter of fact, we have no assurance of an
> extant counterpart for predicting human solutions on the basis of any
> amount of relevant information concerning their genetic and cultural
> provenance, personal background, socioeconomic role, or biological
> function.  More importantly, introspective evidence suggests that any
> prediction of human behavior can be easily confuted by its subject,
> once it is made available thereto.  In principle, no such ability to
> frustrate the best effort of surmising its future behavior on the basis
> of its design and initial conditions, can be imputed to any mechanism.
> So if our introspective conclusions are valid, minds are not machines.
> This much we know already.

Whoa there matey, you give the human access to the results
of the prediction but the machine gets no such access.
So one is playing poker with all hands visible and the other with
all hands hidden. For a valid competition, all entities must be
playing the same game.

More than that, please qualify the words `any mechanism';
my pair of dice are perfectly able to `frustrate the best effort of
surmising its [their] future behavior'. You must restrict yourself to
predicting the future behaviour of digital, synchronous mechanisms
(desktop computers, for example).
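The restriction matters, and a small sketch (my own, under the
assumption that `pseudorandom devices' means something like a seeded
generator) shows why: a digital mechanism with a pseudorandom device is
still perfectly predictable, because re-running it from the same design
(code) and initial conditions (seed) reproduces its behaviour exactly.
Physical dice offer no such seed to replay.

```python
# Sketch: a deterministic "mechanism" using a pseudorandom device.
# The best predictor of its behaviour is a second run of the mechanism
# itself with the same initial conditions -- exactly Zeleny's point,
# but only for digital, synchronous mechanisms.

import random

def mechanism(seed, rolls=10):
    """Ten simulated die rolls driven by a seeded pseudorandom device."""
    rng = random.Random(seed)
    return [rng.randint(1, 6) for _ in range(rolls)]

# "Prediction" is just replay from the same seed:
print(mechanism(42) == mechanism(42))   # True -- no dice can promise this
```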

	- Tel
