Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3323 sci.philosophy.tech:2000
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!sdd.hp.com!samsung!uunet!mcsun!news.funet.fi!sunic!liuida!c89ponga
From: c89ponga@odalix.ida.liu.se (Pontus Gagge)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Humongous table-lookup misapprehensions
Keywords: table-lookup,AI
Message-ID: <1992Jan31.025035.15035@ida.liu.se>
Date: 31 Jan 92 02:50:35 GMT
References: <1992Jan25.224700.8656@ida.liu.se> <1992Jan28.164711.8184@husc3.harvard.edu>
Sender: news@ida.liu.se
Organization: CIS Dept, Univ of Linkoping, Sweden
Lines: 175

(Sorry, this got rather lengthy, as I am too much of a coward to
 abbreviate Mr. Zeleny; I have seen the sulphurous treatment that
 people who misunderstand him publicly are likely to get)   :-) (maybe)

zeleny@zariski.harvard.edu (Mikhail Zeleny) writes:

>In article <1992Jan25.224700.8656@ida.liu.se> 
>c89ponga@odalix.ida.liu.se (Pontus Gagge) writes:

>>Pro primo: The table-lookup passes the Turing test by *definition*. There
>>*is* no conversation which makes it fail the Turing test - to reveal that
>>it is not a human. Whatever extra condition you pose (time, city, earlier
>>conversations) may be met by simply augmenting the definition.

>Not true.  Temporal considerations undermine the very idea of a static
>table, requiring constant update thereof through some input mechanism.
>Once this is admitted, you are faced with the problem of representing
>contextual information in a fashion that lends itself to an application of
>the same lexicographic ordering device used in the construction of the
>original table.  At this point, the issue of knowledge representation rears
>its ugly head.  Sorry, but until you find a way to deal with all this, the
>question of table-lookup Turing-cheater has to be answered in the negative.

I fail to see the deadly effect of temporal considerations. I assume you are
not merely considering questions such as "What time is it?", as these can be
remedied by the timer ticks someone (sorry, lost the source) proposed
incorporating into the input stream, where each tick changes the machine's
state. All knowledge representation is in the machine state.
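
To make the tick idea concrete, here is a minimal sketch of my own, in
Python (the TICK marker and the table entries are pure invention): the
table is keyed on the entire input stream, clock ticks included, so a
time-dependent question is just another entry.

TICK = "<tick>"        # hypothetical marker fed into the stream each minute

TABLE = {              # two toy entries out of an astronomical many
    ("Hello.",): "Hello yourself.",
    ("Hello.", TICK, "What time is it?"):
        "About a minute later than when you greeted me.",
}

def reply(history):
    """Return the canned reply for the input stream seen so far."""
    return TABLE.get(tuple(history), "Hmm?")   # the real table has no gaps

print(reply(["Hello."]))
print(reply(["Hello.", TICK, "What time is it?"]))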

What other temporal effects may there be? I seem to recall your mentioning
the resolution of anaphoric pronouns, which refer to earlier statements. But
if a pronoun occurs, why can't the table constructor resolve it before the
conversation takes place, as the definition states she does? Is there some
other class of language phenomena which actually demands that the conversation
be performed entirely in "real time", instead of during the construction
phase?

I would expect *deictic* pronouns to be a much greater problem, in
your view, as they directly connect real-world objects to language
(correct me if I am wrong (if you need encouraging) :->). However,
inasmuch as they can occur in a teletype conversation, they can be
resolved by the constructor in the very same manner as other
pronouns.

Now, if you could show that we need an *infinite* set of improvements/
extensions to the scenario, you would have me convinced...

>>Pro secundo: It (the finite, max-100-year variant) is equivalent to *a* 
>>Turing machine which therefore passes the test. Thus it is surely
>>relevant to the test-adequacy discussion. (For once, I not only welcomed
>>an article by Mr. Zeleny, but actually *agreed*, as he recently
>>pointed this out). The inputs are discrete; at each given point in
>>the conversation a unique state is reached: therefore, we have a TM
>>(indeed, a DFA).

>I am afraid you misunderstood me. 

Did you not say that the table-cheater was equivalent to a DFA, after
an attempt by an AI proponent to evade the thought experiment? If not, I
apologize. (But you *should* have said it :-).)
 
>                                  My point was that, due to the contextual
>factors, the "conversation state" can't be determined on the basis of words
>alone.  Once again, the semantic and pragmatic issues have to be addressed
>before you can declare the syntactical aspect to be under control.

But why cannot the human constructor determine the state by herself, in
advance of the actual conversation? Is there some reason she cannot enumerate
all possible finite strings of words and keep only the sensible conversations?
Do you not agree that all possible, and thus all sensible, conversations
shorter than the arbitrary limit of 100 years form a finite (enumerable) set?
Naturally, I agree that there are semantic and pragmatic aspects of the
conversation; but how do they affect the strings themselves? Exactly where
do the strings lack semantics?

The way to deal with the issues of meaning is to let our poor constructor
take care of them. Does she not possess semantic understanding? Why can
she not construct this set of conversations?
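
For what it is worth, the finiteness claim itself is elementary. A toy
sketch in Python (the three-word vocabulary and the tiny length bound are
stand-ins for the real, astronomically larger ones):

from itertools import product

VOCAB = ["yes", "no", "maybe"]   # stand-in for a real, finite vocabulary
MAX_LEN = 3                      # stand-in for "whatever fits in 100 years"

def all_conversations(vocab, max_len):
    """Enumerate every word sequence of length 0..max_len, in order."""
    for length in range(max_len + 1):
        for conv in product(vocab, repeat=length):
            yield conv

convs = list(all_conversations(VOCAB, MAX_LEN))
print(len(convs))   # 1 + 3 + 9 + 27 = 40: finite, hence enumerable
# The constructor's job is "merely" to strike out the non-sensible ones
# and attach a reply to each of the rest.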

>>[My tertiary point deleted]
>Read it here: it's [it=table construction; see, I can at least resolve
>anaphora *after* a conversation has taken place] impossible IN THEORY.

I am afraid you will have to show this.

>>Right. Now *that* is out of my system. The one problem with this 
>>"AI"-program is that it implements intelligent behaviour in an 
>>*uninteresting* manner. It tells us nothing about intelligence. Its 
>>(hypothetical) creator can always (hypothetically) predict what the 
>>next reply will be.

>Good thinking.  Alas, the same consideration will apply to all
>deterministic machines.

A good point. However, for some DFAs this will be a stronger "hypothetical"
than for others; for the table-cheater it is fairly easy to define a
procedure to predict the answer (merely a lot of hard work leafing through
all those enumerations). Part of my definition of an *interesting* AI would
be that prediction should be rather harder, and should tell us something about
the nature of intelligence. (The remaining part I do not yet have. Perhaps
further discussion can remedy that. Perhaps.)
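
To put a point on "fairly easy": for the table-cheater the prediction
procedure is literally the same lookup the stand-in performs. A toy
sketch of mine, nothing more:

def predict_next_reply(table, conversation_so_far):
    """Predicting the cheater is just leafing through the enumeration:
    find the current conversation prefix, read off the canned reply."""
    return table[tuple(conversation_so_far)]

# Contrast: predicting a system whose replies depend on state it has built
# up itself would mean reproducing that state, i.e. redoing its cogitation.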

>>[I state that the intelligence is in the mind of the creator; thus,
>> we do not have an *A*I]

>Ditto.

>>[Narrative deleted; sketches the construction process in all its
>> immensity]
>>However, all cogitation which produced it was done by the creator.
>>The table passes the test. What may we conclude? Why, that the *creator*
>>can impersonate an intelligent person (despite that infinite
>>dedication :-)). She has merely left a list of what to say in a given 
>>situation, and let somebody act as a stand-in for her.

>Very good.

>>If you agree with my conclusion, and with strong AI, we have an 
>>interesting result. It seems that an intelligence can be shared by
>>an entity and a creator, in varying proportions; in the table-lookup
>>the proportion is an unattainable 0 to 1; whereas a true AI programme
>>would have 1 to 0.

>This hasn't been shown.

OK, I'll explicitly state an "intelligence continuum hypothesis", to
nail down what a Turing-cheat is:
  An intelligence may be in the mind of the beholder, the mind of
  the creator, and/or the mind of the tested entity, in any proportions,
  the sum of which is 1.

A relevant example would be a proto-AI whose creator has endowed
it with a table-cheat for some problems (say, arithmetic) that it
does not handle well otherwise (it might be a chimpanzee-level
intellect (wow!)). Here at least I would ascribe some intelligence
to the system itself; the creator need not, in practice, be able to
predict its next answer except in the domain of the cheat.
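
A back-of-the-envelope sketch of such a hybrid (the table contents and
the fallback are wholly invented; the point is only that predictability
splits along the same line as the canning):

ARITHMETIC_CHEAT = {            # canned by the creator in advance
    "What is 2+2?": "4",
    "What is 7*8?": "56",
}

def native_intellect(question):
    """Stand-in for the chimpanzee-level reasoning the system does itself."""
    return "Let me think about that..."

def answer(question):
    if question in ARITHMETIC_CHEAT:     # the creator's intelligence, canned
        return ARITHMETIC_CHEAT[question]
    return native_intellect(question)    # the system's own, such as it is

print(answer("What is 7*8?"))            # predictable by reading the table
print(answer("Do you like bananas?"))    # not in the can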

>>This would yield an entire new area to quibble over; how to estimate
>>the amount of "canned" intelligence in a purported AI. Could this be
>>done without inspection? Or is there no such "gradual" canning in the
>>real (if future) world?

>Simple: it has native intelligence, insofar, and inasmuch, as it can be
>ascribed free will by an omniscient observer possessed of a perfect insight
>into its construction and functioning.

"Simple"? Oh well, no practical test may exist. However, this is
saying, in effect, that the problem is undecidable (in a non-
mathematical meaning), which is disheartening to anyone using the
Turing Test as the operational criterion for awareness (or whatever
expression you prefer: intelligence, semantics, causal powers, whatnot).

BTW, I would expect any AI opponent such as Mr. Zeleny to welcome
the equivalence between the table-cheat and a DFA. But maybe there is
too great a price to pay for that victory: admission that an enumeration
of possible dialogues may exist. (Which is hard to refute, methinks.)

I would greatly welcome a discussion on this, as I do not myself really
have a good answer to the problem of Turing-cheating. If there is a
realizable cheat, the Turing Test would lose much, if not most, of
its appeal. Would a suspicious tester always detect the cheats?

>`'`'`'`'`'`'`'`'`'  <---- Slightly elided version of the
>: Mikhail Zeleny :        well-known original.
>:                :
>'`'`'`'`'`'`'`'`'`

--
/-------------------------+-------- DISCLAIMER ---------\
| Pontus Gagge            | The views expressed herein  |
| University of Linköping | are compromises between my  |
|                         | mental subpersonae, and may |
| c89ponga@und.ida.liu.se | be held by none of them.    |
\-------------------------+-----------------------------/


