From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!uunet!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny Fri Jan 31 10:26:39 EST 1992
Article 3229 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3229 sci.philosophy.tech:1984
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!swrinde!cs.utexas.edu!uunet!ogicse!das.harvard.edu!husc-news.harvard.edu!zariski!zeleny
From: zeleny@zariski.harvard.edu (Mikhail Zeleny)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Humongous table-lookup misapprehensions
Keywords: table-lookup,AI
Message-ID: <1992Jan28.164711.8184@husc3.harvard.edu>
Date: 28 Jan 92 21:47:08 GMT
References: <1992Jan25.224700.8656@ida.liu.se>
Organization: Dept. of Math, Harvard Univ.
Lines: 111
Nntp-Posting-Host: zariski.harvard.edu

In article <1992Jan25.224700.8656@ida.liu.se> 
c89ponga@odalix.ida.liu.se (Pontus Gagge) writes:

>This debate has continued beyond my endurance level as a normally
>passive reader. Avaunt, ye scurvy bandwidth complainers!

You won't find me among their scrofulous ranks.  The more, the merrier,
that's what I say.

>Pro primo: The table-lookup passes the Turing test by *definition*. There
>*is* no conversation which makes it fail the Turing test - to reveal that
>it is not a human. Whatever extra condition you pose (time, city, earlier
>conversations) may be met by simply augmenting the definition.

Not true.  Temporal considerations undermine the very idea of a static
table, requiring constant update thereof through some input mechanism.
Once this is admitted, you are faced with the problem of representing
contextual information in a fashion that lends itself to an application of
the same lexicographic ordering device used in the construction of the
original table.  At this point, the issue of knowledge representation rears
its ugly head.  Sorry, but until you find a way to deal with all this, the
question of the table-lookup Turing-cheater has to be answered in the negative.
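
To make the objection concrete, here is a minimal sketch (in Python,
purely illustrative; the dictionary, its entries, and the function name
are invented for the example) of the table-lookup machine keyed on the
whole conversation so far:

    # Illustrative sketch of the table-lookup "Turing-cheater";
    # not anyone's actual construction.
    CANNED_REPLIES = {
        ("Hello.",): "Hello yourself.",
        ("Hello.", "How are you?"): "Can't complain.",
        # ... in the thought experiment, one entry per admissible prefix ...
    }

    def reply(history):
        # Look up the canned reply for the whole conversation so far.
        # The difficulty raised above: once the correct answer depends on
        # anything outside the words themselves (the time, the city,
        # earlier sessions), the tuple of utterances no longer picks out
        # a unique entry; the static table must be augmented by some
        # further representation of context, which is where knowledge
        # representation comes back in.
        return CANNED_REPLIES.get(tuple(history), "I don't follow you.")

    print(reply(["Hello.", "How are you?"]))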

>Pro secundo: It (the finite, max-100-year variant) is equivalent to *a* 
>Turing machine which therefore passes the test. Thus it is surely
>relevant to the test-adequacy discussion. (For once, I not only welcomed
>an article by Mr. Zeleny, but actually *agreed*, as he recently
>pointed this out). The inputs are discrete; at each given point in
>the conversation a unique state is reached: therefore, we have a TM
>(indeed, a DFA).

I am afraid you misunderstood me.  My point was that, due to the contextual
factors, the "conversation state" can't be determined on the basis of words
alone.  Once again, the semantic and pragmatic issues have to be addressed
before you can declare the syntactical aspect to be under control.
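
For concreteness, a toy rendering (again Python, with invented names) of
the automaton reading: states are conversation prefixes, each discrete
utterance yields exactly one successor state, and with a finite utterance
alphabet and a bounded conversation length the state set is finite.

    def step(state, utterance):
        # Deterministic transition: the successor state is just the old
        # prefix extended by the new utterance, so the finite table can
        # indeed be read as a (huge) DFA over word-sequences.
        return state + (utterance,)

    # The counterpoint above: two runs with identical word-sequences can
    # demand different answers when the surrounding context differs, so
    # the word-only state does not exhaust the "conversation state".
    state = ()
    for utterance in ("Hello.", "What time is it?"):
        state = step(state, utterance)
    print(state)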

>Pro tertio: Always remember that it is practically absurd, but
>theoretically possible. It is therefore unsuitable as an example of
>"fake" AI, as AI research will not produce it; but useful to a
>discussion of the Turing test.

Read it here: it's impossible IN THEORY.

>Right. Now *that* is out of my system. The one problem with this 
>"AI"-program is that it implements intelligent behaviour in an 
>*uninteresting* manner. It tells us nothing about intelligence. Its 
>(hypothetical) creator can always (hypothetically) predict what the 
>next reply will be.

Good thinking.  Alas, the same consideration will apply to all
deterministic machines.

>What is it then? An AI? I would say not. Where does it fail? Is it
>unintelligent? I would say not, as it will reply intelligently (and
>yes, I am guilty of the operationalist sin). However, that intelligence
>is not artificial; it is the intelligence of its creator.

Ditto.

>Consider: Let us imagine a single, infinitely dedicated, and
>infallible creator. We give her (should be PC :-)) a longevity drug 
>and put her in a timewarp with adequate time (say, a few billion years), 
>from which she returns with the table. We proceed to Turing test it. Now, 
>to any question or statement, the answer is certainly found in the table. 
>However, all cogitation which produced it was done by the creator.
>The table passes the test. What may we conclude? Why, that the *creator*
>can impersonate an intelligent person (despite that infinite
>dedication :-)). She has merely left a list of what to say in a given 
>situation, and let somebody act as a stand-in for her.

Very good.

>If you agree with my conclusion, and with strong AI, we have an 
>interesting result. It seems that an intelligence can be shared by
>an entity and a creator, in varying proportions; in the table-lookup
>the proportion is an unattainable 0 to 1; whereas a true AI programme
>would have 1 to 0. (The creator need be no moron (0 intelligence): we are 
>talking about the intelligence that passes the Turing test; the creator 
>may retain all his intelligence; it will merely not participate in the test).

This hasn't been shown.

>This would yield an entire new area to quibble over; how to estimate
>the amount of "canned" intelligence in a purported AI. Could this be
>done without inspection? Or is there no such "gradual" canning in the
>real (if future) world?

Simple: it has native intelligence insofar, and inasmuch, as it can be
ascribed free will by an omniscient observer possessed of perfect insight
into its construction and functioning.

>--
>/-------------------------+-------- DISCLAIMER ---------\
>| Pontus Gagge            | The views expressed herein  |
>| University of Linköping  | are compromises between my  |
>|                         | mental subpersonae, and may |
>| c89ponga@und.ida.liu.se | be held by none of them.    |
>\-------------------------+-----------------------------/


`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'
: Qu'est-ce qui est bien?  Qu'est-ce qui est laid?         Harvard   :
: Qu'est-ce qui est grand, fort, faible...                 doesn't   :
: Connais pas! Connais pas!                                 think    :
:                                                             so     :
: Mikhail Zeleny                                                     :
: 872 Massachusetts Ave., Apt. 707                                   :
: Cambridge, Massachusetts 02139           (617) 661-8151            :
: email zeleny@zariski.harvard.edu or zeleny@HUMA1.BITNET            :
:                                                                    :
'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`'`


