Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!pipex!lyra.csx.cam.ac.uk!sunsite.doc.ic.ac.uk!dcs.gla.ac.uk!unix.brighton.ac.uk!mjs14
From: mjs14@unix.brighton.ac.uk (shute)
Subject: Re: RACE and IQ
Message-ID: <1994Nov1.184222.20267@unix.brighton.ac.uk>
Organization: University of Brighton, UK
References: <38iuhh$ge@pheidippides.axion.bt.co.uk> <38n5f4$e55@disc.coactive.com> <3952p3$al@pheidippides.axion.bt.co.uk>
Date: Tue, 1 Nov 1994 18:42:22 GMT
Lines: 26

>David Gaw (dgaw@coactive.com) wrote:
>: As an extreme example  consider the test item "what is 3 + 3".
>: Learning a correct answer to this question clearly does not imply the  
>: subject has a full grasp on the concept of addition. [...]
>: Now one might argue that what I am describing above is a test of
>: *knowledge* of various sorts, not "Intelligence". 

In article <3952p3$al@pheidippides.axion.bt.co.uk> donald@srd.bt.co.uk (Donald Fisk) writes:
>What's the difference?   Probably the most important thing that has been learnt
>from Artificial Intelligence is that, for a program to be intelligent, it has
>to have lots of *knowledge*.   It seems reasonable to assume that the same
>applies for people too.   [...]

I was particularly struck by Marvin Minsky's 'definition' of intelligence
in the 'Will Robots Inherit the Earth?' article in the October issue of
Scientific American.  As I recall it, the idea is roughly that you only
really understand something when you understand it in more than one way.

(At this point, I'd better admit that I have not read "The Society of Mind",
where the idea may well have been introduced.)

So David's 3+3 test might still work if the program/human can be shown
to have more than one way of solving the problem.
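To make that concrete, here is a little sketch (in Python, with names of
my own invention, so treat it as illustration rather than anyone's actual
proposal): a solver only gets credit for "3 + 3" if at least two
independent methods reach the same answer.

```python
# Hypothetical sketch of a "3 + 3" test that rewards having
# more than one way of solving the problem.

def by_arithmetic(a, b):
    """Use the built-in addition operator directly."""
    return a + b

def by_counting(a, b):
    """Count up from a, one unit at a time -- addition as repeated succession."""
    total = a
    for _ in range(b):
        total += 1
    return total

def by_lookup(a, b):
    """Rote memorisation: a stored table of learnt answers (David's worry)."""
    table = {(3, 3): 6}
    return table.get((a, b))

def understands(a, b, methods):
    """Pass only if at least two methods produce an answer, and they all agree."""
    answers = [m(a, b) for m in methods]
    answers = [x for x in answers if x is not None]
    return len(answers) >= 2 and len(set(answers)) == 1

print(understands(3, 3, [by_arithmetic, by_counting, by_lookup]))  # True
print(understands(2, 2, [by_lookup]))                              # False: rote lookup alone fails
```

A pure lookup table passes the naive "what is 3 + 3" test but fails this
one, which is the point: the multiple-methods criterion separates rote
knowledge from something closer to understanding.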
-- 

Malcolm SHUTE.         (The AM Mollusc:   v_@_ )        Disclaimer: all