Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!swrinde!pipex!uunet!olivea!news.hal.COM!decwrl!netcomsv!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: Bag the Turing test (was: Penrose and Searle)
Message-ID: <jqbD0KIu6.G17@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <jqbD0DG73.4uu@netcom.com> <D0GFxv.5zL@gpu.utcc.utoronto.ca> <3c5nml$370@news1.shell> <vlsi_libD0KCL7.C6t@netcom.com>
Date: Sat, 10 Dec 1994 00:03:42 GMT
Lines: 84

In article <vlsi_libD0KCL7.C6t@netcom.com>,
Gerard Malecki <vlsi_lib@netcom.com> wrote:
>In article <3c5nml$370@news1.shell> hfinney@shell.portal.com (Hal) writes:
>>>argued this very convincingly in terms of optimization.
>>
>>I don't think an HLT (humongous lookup table) has to have a complex
>>search algorithm.  As I understand the example, the HLT consists of a
>>table listing every possible conversational sequence and an appropriate
>>reply.  For example, one entry in the table might be:
>>
>>Conversation_Thus_Far:
>>
>>	"Hello?"
>>	"Hi.  Shall we get started?  What would you like to ask me?"
>>	"First of all, are you human?"
>>	"That would be telling, wouldn't it?"
>>	"Well, are you conscious?"
>>
>>Appropriate_Reply:
>>
>>	"It seems to me that I am."
>>
>>The HLT consists of a "humongous" number of such pairs, one for every
>>possible Conversation_Thus_Far.
>>
>>The search algorithm for this HLT is simplicity itself.  Simply record
>>the conversation and look through the HLT for a match in the
>>Conversation_Thus_Far field, then print out the corresponding
>>Appropriate_Reply.  Linear search will work although it would be slow.
>>Hans Moravec suggests a data structure composed of a tree where each
>>letter of the conversation is at one node with pointers to all possible
>>next-letters, with each terminal node pointing at the reply.
>>This could be searched very quickly although you better have lots of
>>memory to hold it!  (Hans suggests using baby universes in his message
>>here.)
>>
>>There is no need for a decision process while using the HLT in order to
>>have a "personality".  The personality is determined by the words in
>>the Appropriate_Reply field.  The program which uses the HLT is only a
>>few lines of code, and the HLT itself consists solely of possible
>>conversations and replies.  I maintain that reasonable people will
>>believe that such a beast is not conscious in the sense that you and I
>>are, and that they are not necessarily confused or in error.

Not necessarily, depending upon exactly why they believe it.  But if they
believe it *because* the complexity is in the data rather than the code,
then I would say they are in error concerning the relevance of that distinction.
Even a brief consideration of the relative roles of structure versus memory in
the human brain should make us wary of this distinction between "code" and "data".
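For the curious, Hal's tree scheme is easy to sketch.  Here is a toy version
(hypothetical contents, character-level branching as Moravec describes; a real
HLT would of course be astronomically large):

```python
# Toy trie-based HLT: each character of the conversation is a node with
# pointers to all possible next characters; a terminal marker points at
# the reply.  Table contents here are made up for illustration.

def build_trie(pairs):
    """pairs: iterable of (conversation_thus_far, appropriate_reply) strings."""
    root = {}
    for conv, reply in pairs:
        node = root
        for ch in conv:
            node = node.setdefault(ch, {})
        node[None] = reply  # terminal node points at the reply
    return root

def lookup(trie, conv):
    """Walk the trie one character at a time; None if conv isn't in the table."""
    node = trie
    for ch in conv:
        node = node.get(ch)
        if node is None:
            return None
    return node.get(None)

pairs = [
    ("Well, are you conscious?", "It seems to me that I am."),
    ("First of all, are you human?", "That would be telling, wouldn't it?"),
]
trie = build_trie(pairs)
print(lookup(trie, "Well, are you conscious?"))
```

Each lookup is linear in the length of the conversation, not in the size of
the table -- which is why the search is fast even though the memory (baby
universes and all) is the hard part.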

>Kolmogorov complexity theory says it really doesn't matter if the
>information is encoded in the data (in this case, the HLT) or in the
>program generating it. Even though the size of the HLT is potentially
>infinite, the amount of information encoded in it would be finite if
>the entries are determined by a finite set of deterministic rules. In 
>that case, the entropy of the HLT would be near zero.

It seems to me the same considerations apply to human brains.
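To make the Kolmogorov point concrete: if the table's entries follow from a
finite deterministic rule, you never need to store the table at all -- any
entry can be regenerated on demand, so the information content is bounded by
the length of the rule.  A toy sketch (the rule here is made up purely for
illustration):

```python
# A short deterministic rule standing in for a "humongous" table: the
# number of distinct (conversation, reply) pairs it covers is unbounded,
# but the information content is just these few lines of code.

def reply(conversation: str) -> str:
    # Hypothetical rule: reply depends only on the word count so far.
    return "You said %d words; tell me more." % len(conversation.split())

# Regenerate any "table entry" on demand instead of storing it.
print(reply("Well, are you conscious?"))
```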

>Of course, we
>can make the information content infinite if we use a true random
>number generator (for example, radioactive decay) to select between
>two or more possible replies for the next question, although I doubt
>if this extra element is going to be of much significance for the 
>Turing Test.

Certainly the *appearance* of randomness is needed to pass the TT, if the
tester knows to repeat questions.  Of course, since every new question
leads to a new state, no real randomness is necessary.  (Even if the test
is applied more than once, the HLT can simply maintain state across tests
rather than resetting each time.  Just like a real human.)
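The point that no real randomness is needed can be shown in miniature: since
the HLT is keyed on the *entire* transcript, a repeated question reaches a
different table entry and can draw a different reply, deterministically
(toy table, hypothetical contents):

```python
# Keyed on the whole conversation-thus-far, a repeated question lands on a
# different entry -- varied replies with no random number generator at all.

table = {
    ("Are you conscious?",):
        "It seems to me that I am.",
    ("Are you conscious?", "Are you conscious?"):
        "You already asked me that.",
}

transcript = []
for question in ["Are you conscious?", "Are you conscious?"]:
    transcript.append(question)
    print(table[tuple(transcript)])
```

Maintaining the transcript across multiple tests (rather than resetting it)
works the same way: the state just keeps growing, like a real human's.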

>But the above example offers an interesting theoretical viewpoint. If
>our (infinite size) HLT has two possible replies for every current 
>outstanding question, both derived from finite deterministic rules,
>and we have a (relatively short) program
>that uses a true random number generator to select between the two
>replies, Kolmogorov complexity theory says that most of the information
>content (intelligence?) of the overall system is in the program that
>does the choosing and not in the HLT!

If so, this sounds like a problem for KC.
-- 
<J Q B>
