Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!uunet!olivea!news.hal.COM!decwrl!netcomsv!netcom.com!vlsi_lib
From: vlsi_lib@netcom.com (Gerard Malecki)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <vlsi_libD0KCL7.C6t@netcom.com>
Organization: VLSI Libraries Incorporated
References: <jqbD0DG73.4uu@netcom.com> <D0GFxv.5zL@gpu.utcc.utoronto.ca> <3c5nml$370@news1.shell>
Date: Fri, 9 Dec 1994 21:48:43 GMT
Lines: 68

In article <3c5nml$370@news1.shell> hfinney@shell.portal.com (Hal) writes:
>>argued this very convincingly in terms of optimization.
>
>I don't think an HLT (humongous lookup table) has to have a complex
>search algorithm.  As I understand the example, the HLT consists of a
>table listing every possible conversational sequence and an appropriate
>reply.  For example, one entry in the table might be:
>
>Conversation_Thus_Far:
>
>	"Hello?"
>	"Hi.  Shall we get started?  What would you like to ask me?"
>	"First of all, are you human?"
>	"That would be telling, wouldn't it?"
>	"Well, are you conscious?"
>
>Appropriate_Reply:
>
>	"It seems to me that I am."
>
>The HLT consists of a "humongous" number of such pairs, one for every
>possible Conversation_Thus_Far.
>
>The search algorithm for this HLT is simplicity itself.  Simply record
>the conversation and look through the HLT for a match in the
>Conversation_Thus_Far field, then print out the corresponding
>Appropriate_Reply.  Linear search will work although it would be slow.
>Hans Moravec suggests a data structure composed of a tree where each
>letter of the conversation is at one node with pointers to all possible
>next-letters, with each terminal node pointing at the reply.
>This could be searched very quickly although you better have lots of
>memory to hold it!  (Hans suggests using baby universes in his message
>here.)
>
>There is no need for a decision process while using the HLT in order to
>have a "personality".  The personality is determined by the words in
>the Appropriate_Reply field.  The program which uses the HLT is only a
>few lines of code, and the HLT itself consists solely of possible
>conversations and replies.  I maintain that reasonable people will
>believe that such a beast is not conscious in the sense that you and I
>are, and that they are not necessarily confused or in error.
>
>Hal Finney
>hfinney@shell.portal.com
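
The tree Hans suggests is essentially a trie keyed on the characters
of the conversation so far. A toy sketch (my own illustration -- the
names and helpers here are made up for the example):

```python
# Toy trie: each node maps the next character to a child node; a
# terminal node holds the Appropriate_Reply for the complete
# Conversation_Thus_Far that led to it.

REPLY = object()  # sentinel key under which a node stores its reply

def insert(trie, conversation, reply):
    """Store one (Conversation_Thus_Far, Appropriate_Reply) pair."""
    node = trie
    for ch in conversation:
        node = node.setdefault(ch, {})
    node[REPLY] = reply

def lookup(trie, conversation):
    """Walk one node per character -- time is linear in the length of
    the conversation, independent of how humongous the table is."""
    node = trie
    for ch in conversation:
        node = node.get(ch)
        if node is None:
            return None
    return node.get(REPLY)

hlt = {}
insert(hlt, "Well, are you conscious?", "It seems to me that I am.")
print(lookup(hlt, "Well, are you conscious?"))
# It seems to me that I am.
```

Lookup cost no longer depends on the table size -- only the memory
requirement does, which is exactly where the baby universes come in.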

Kolmogorov complexity theory says it really doesn't matter whether
the information is encoded in the data (in this case, the HLT) or in
the program generating it. Even though the size of the HLT is
potentially infinite, the amount of information encoded in it would
be finite if the entries were determined by a finite set of
deterministic rules; in that case, the entropy of the HLT would be
near zero. Of course, we could make the information content infinite
by using a true random number generator (for example, radioactive
decay) to select between two or more possible replies to the next
question, although I doubt that this extra element would be of much
significance for the Turing Test.
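
To make the point concrete, here is a toy sketch (my own illustration,
with a made-up rule standing in for the finite rule set): a table of
any size whose entries all follow a short deterministic rule carries
no more information than the rule itself, since the whole table can be
regenerated from those few lines.

```python
# A deterministic rule: the reply is fully determined by the
# conversation, so the Kolmogorov complexity of the table below is
# bounded by the length of this function, not by the table's size.
def reply(conversation):
    return "You said %d words. Please go on." % len(conversation.split())

# "Expanding" the rule into an explicit table adds no information,
# however many entries we enumerate.
table = {c: reply(c) for c in ("Hello?", "Are you conscious?")}
print(table["Are you conscious?"])
# You said 3 words. Please go on.
```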

But the above example offers an interesting theoretical viewpoint. If
our (infinite-size) HLT has two possible replies for every
outstanding question, both derived from finite deterministic rules,
and we have a (relatively short) program that uses a true random
number generator to select between the two replies, then Kolmogorov
complexity theory says that most of the information content
(intelligence?) of the overall system is in the program that does the
choosing and not in the HLT!
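
A toy sketch of that arrangement (my own illustration -- os.urandom
stands in here for a true physical randomness source such as
radioactive decay, and the candidate rule is made up):

```python
import os

# Two deterministic, rule-derived candidate replies per question; a
# short program draws one truly random bit per exchange to pick
# between them. The table side compresses down to its rules; the
# incompressible content of any transcript is exactly the random
# bits the chooser drew.

def candidate_replies(conversation):
    # Both candidates derived from a finite deterministic rule.
    n = len(conversation.split())
    return ("You said %d words." % n, "Go on, I'm listening.")

def choose(conversation):
    bit = os.urandom(1)[0] & 1   # one truly random bit per exchange
    return candidate_replies(conversation)[bit]

print(choose("Well, are you conscious?"))
```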

Shankar Ramakrishnan
shankar@vlibs.com
