From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!gdt!chpetk Thu Feb 20 15:21:21 EST 1992
Article 3792 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!gdt!chpetk
From: chpetk@gdr.bath.ac.uk (Toby Kelsey)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and Panpsychism
Message-ID: <1992Feb16.235110.26274@gdr.bath.ac.uk>
Date: 16 Feb 92 23:51:10 GMT
References: <6182@skye.ed.ac.uk> <1992Feb13.234630.1092@psych.toronto.edu> <419@tdatirv.UUCP>
Reply-To: chpetk@uk.ac.bath.gdr (Toby Kelsey)
Organization: School of Chemistry, University of Bath, UK
Lines: 51

>In article <1992Feb13.234630.1092@psych.toronto.edu> michael@psych.toronto.edu (Michael Gemar) writes:

>|But *why* is table lookup an "extreme"?   This is the problem that I have.
>|There seems to be some sort of implicit assumption about what kind
>|of functional organization is required to generate belief.  What I
>|am trying to uncover is what that assumption is.  If there *isn't* one
>|any deeper than "well, it's just *obvious* that table lookup can't
>|generate beliefs," then you are merely ruling them out ad hoc.       

Perhaps I am just being stupid, but I don't see what is
causing the problems with table lookup intelligence.

If you accept that a puppet remotely controlled by an
intelligent being isn't showing intelligence, despite its
responses, because an intelligence elsewhere is doing the
hard work (so you are really measuring the intelligence of
the remote being), then why can't you accept that responses
from a precomputed table don't show intelligence either,
since an intelligence was required earlier to do the hard
work?

<wild speculation mode on>
On the other hand you could argue that all intelligence
requires a knowledge base and a pattern-matcher to apply
it. Table lookup is one extreme; the other extreme is an
intelligence that deduces the existence of everything from
first principles and forgets it within 10 minutes. Both
extremes are still intelligent but inefficient, and if
efficiency plays any part in intelligence an intermediate
position will be superior. (Efficiency is required - any
problem that 'requires' intelligence can eventually be
solved by a dumb trial-and-error approach.)
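The two extremes above could be caricatured in code. This is a toy
sketch (the arithmetic domain and the function names are my own
invention, not anything from the discussion): one agent retrieves
precomputed answers, the other rebuilds every answer from the most
primitive operation available and keeps nothing.

```python
# Toy sketch of the two extremes for answering "a + b" questions.

# Extreme 1: pure table lookup -- all the "thinking" was done in
# advance by whoever built the table; nothing happens at run time
# except retrieval.
TABLE = {(a, b): a + b for a in range(100) for b in range(100)}

def lookup_answer(a, b):
    """No reasoning at run time; just retrieval from the table."""
    return TABLE[(a, b)]

# Extreme 2: derive everything from first principles each time
# (here, addition rebuilt from repeated increment) and remember
# nothing between calls.
def derived_answer(a, b):
    """Recompute from scratch; no stored knowledge survives the call."""
    result = a
    for _ in range(b):
        result += 1
    return result
```

Both give the same answers; the lookup version paid its cost up
front, the derivation version pays it on every call - which is the
sense in which an intermediate position (some stored knowledge plus
some computation) wins on efficiency.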

On the third hand, the whole argument becomes moot if your
definition of intelligence includes the ability to learn,
since a lookup table cannot. If the table contains a fake
learning curve, then that is a deliberate attempt at
deception.
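The learning point can be made concrete with a minimal sketch
(the class names and the stimulus/response interface are invented
for illustration): a fixed table gives the same response forever,
while a learner changes its behaviour in response to correction.

```python
# A fixed table responds the same way forever; a learner updates
# its behaviour from feedback. (Toy example; interface invented.)

class TableAgent:
    def __init__(self, table):
        self._table = dict(table)   # contents frozen at construction
    def respond(self, stimulus):
        return self._table.get(stimulus, "?")

class LearningAgent(TableAgent):
    def correct(self, stimulus, right_answer):
        # Feedback actually changes future behaviour.
        self._table[stimulus] = right_answer

fixed = TableAgent({"2+2": "5"})
learner = LearningAgent({"2+2": "5"})
learner.correct("2+2", "4")
# fixed.respond("2+2") is still "5"; learner.respond("2+2") is "4"
```

A table that merely *appeared* to improve would need its fake
learning curve written in by the table-builder in advance - the
deliberate deception mentioned above.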

My uninformed opinion is that any definition of intelligence
includes speed and memory capacity. By sending off an intelligent
being for an unspecified time to create a table of undefined size,
you are effectively granting it infinite intelligence. If a
finite table is created in a finite time, I would expect the
intelligence displayed by the table to be very inefficient
w.r.t. the effort put in.

-- 
Toby Kelsey
School of Chemistry,  # JANET: chpetk@uk.ac.bath.gdr, otherwise
University of Bath.   # chpetk%gdr.bath.ac.uk@nsfnet-relay.ac.uk


