Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Strong AI and consciousness
Message-ID: <jqbD0D7ou.56I@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <1994Dec2.143356.8747@oracorp.com> <3bq5oq$g06@news1.shell>
Date: Tue, 6 Dec 1994 01:19:41 GMT
Lines: 20

In article <3bq5oq$g06@news1.shell>, Hal <hfinney@shell.portal.com> wrote:
>A more important criticism of the AC approach is simply that there is no
>algorithmic way to calculate the algorithmic complexity of any given data
>set.  Because of the halting problem, there is no way of being sure that
>some program shorter than the best one known would generate the same
>data.  So at best we can give an upper bound to the AC of a string.  I
>still think this may be a useful concept in giving a foundation to our
>intuitive notions of reasonableness of theories and interpretations.

I don't want to comment on the rest of the AC issues, but I do take exception
to this.  The halting problem says that no single Turing machine can decide,
for every Turing machine and input, whether it halts.  But that doesn't apply
here.  We have a program and a finite data set.  We can enumerate all shorter
programs and analyze each of them as it works on that finite data set.  The
HP does not say that we cannot determine, for each such case, whether it
terminates.  In fact, I'm fairly certain we always can.  (I'm way too rusty
and too lazy to prove such a thing, but if I'm wrong the HP is a much stronger
statement than necessary.)
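
The enumerate-and-check idea can be sketched concretely.  Here's a minimal
Python toy, assuming a Brainfuck-style language as the program model and an
arbitrary step budget in place of a real termination analysis (the budget,
interpreter details, and function names are all my own illustration, not
anything from the thread).  It searches all programs shorter than a given
length for the shortest one that emits a target byte string:

```python
from itertools import product

OPS = "+-<>[]."  # toy instruction set; ',' (input) omitted since there is no input


def run(prog, max_steps=1000):
    """Run a toy Brainfuck-like program; return its output bytes,
    or None if it is malformed or exceeds the step budget."""
    # Precompute matching brackets; reject unbalanced programs.
    match, stack = {}, []
    for i, c in enumerate(prog):
        if c == '[':
            stack.append(i)
        elif c == ']':
            if not stack:
                return None
            j = stack.pop()
            match[i], match[j] = j, i
    if stack:
        return None

    tape, ptr, out, ip, steps = [0] * 64, 0, [], 0, 0
    while ip < len(prog):
        steps += 1
        if steps > max_steps:
            return None  # budget exceeded; treated as non-halting
        c = prog[ip]
        if c == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif c == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif c == '>':
            ptr = (ptr + 1) % len(tape)
        elif c == '<':
            ptr = (ptr - 1) % len(tape)
        elif c == '.':
            out.append(tape[ptr])
        elif c == '[' and tape[ptr] == 0:
            ip = match[ip]
        elif c == ']' and tape[ptr] != 0:
            ip = match[ip]
        ip += 1
    return bytes(out)


def ac_upper_bound(target, max_len=4, max_steps=1000):
    """Shortest program (up to max_len, within the step budget)
    whose output equals target; returns (length, program) or None."""
    for length in range(1, max_len + 1):
        for prog in map(''.join, product(OPS, repeat=length)):
            if run(prog, max_steps) == target:
                return length, prog
    return None


# Example: shortest program printing the single byte 0x01
# ac_upper_bound(b"\x01")  ->  (2, "+.")
```

Note that the step budget is exactly where the dispute lives: with a finite
budget this is only an upper bound on the AC, and whether the budget can be
replaced by a genuine termination decision for each enumerated program is the
point at issue above.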
-- 
<J Q B>
