Newsgroups: comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!news.sei.cmu.edu!cis.ohio-state.edu!magnus.acs.ohio-state.edu!math.ohio-state.edu!howland.reston.ans.net!ix.netcom.com!netcom.com!jqb
From: jqb@netcom.com (Jim Balter)
Subject: Re: RACE and IQ
Message-ID: <jqbCynv11.5tF@netcom.com>
Organization: NETCOM On-line Communication Services (408 261-4700 guest)
References: <38iuhh$ge@pheidippides.axion.bt.co.uk> <38n5f4$e55@disc.coactive.com>
Date: Wed, 2 Nov 1994 22:12:37 GMT
Lines: 47

In article <38n5f4$e55@disc.coactive.com>, David Gaw <dgaw@coactive.com> wrote:
>In article <38iuhh$ge@pheidippides.axion.bt.co.uk> donald@srd.bt.co.uk (Donald  
>Fisk) writes:
>     > David Christopher Swanson (dcs2e@darwin.clas.Virginia.EDU) wrote:
>     > : It seems at least possible to me that a test could be based on
>     > : the assumption that the test-taker has not practiced precisely
>     > : the activities used in the test, and that if he has practiced
>     > : them his test results will be inaccurately high (inaccurately
>     > : as regards his intelligence in real life situations).
>     > : As a practical question, however, I know that a lot of people
>     > : do take the test more than once (I took it twice in the third
>     > : grade), and some may practice extensively.  Even those who take
>     > : it only once may practice first; the type of questions used is
>     > : no big mystery.  Maybe these practices ARE giving inaccurate
>     > : results.
>     > 
>     > Practice at IQ tests does indeed boost your IQ.   What's the
>     > problem with that?   If somebody has a French or Software
>     > Engineering exam, we don't say that their results are inaccurate
>     > because they have studied for the exam beforehand, or practised
>     > on past papers, so why should things be any different for IQ?
>     > 
>     > : David
>     > 
> I believe the reason that practicing invalidates the test (in the eyes of the  
>test-makers/givers) is that the test is designed to assess performance on a  
>*class* of concepts. A test item is just an instance of a concept it is testing  
>for. By practicing, the subject masters that *item*, but may or may not have  
>really increased mastery of the category that item is meant to represent.
>
>I am sure test-designers try to address this sort of thing, but to some extent  
>it seems impossible. As an extreme example, consider the test item "what is 3 +  
>3?". Learning the correct answer to this question clearly does not imply that  
>the subject has a full grasp of the concept of addition. But maybe the subject  
>DOES fully understand addition. Only assessing performance on multiple  
>instances of the "addition concept" can tell for sure. 
>
>To wrinkle this further... What if the subject gets 99/100 of the "what is x +  
>y" questions right? Do they "understand the concept of addition"?
>
>Now one might argue that what I am describing above is a test of *knowledge* of  
>various sorts, not "Intelligence". Maybe so, but I think the ideas generalize  
>to Intelligence functions as well... Agree?
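The sampling idea above can be sketched as a toy experiment (a minimal illustration only; all names here are hypothetical, and Python is just a convenient notation, not anything from the original posts):

```python
import random

def make_items(n, seed=0):
    """Sample n instances of the 'addition concept': questions 'what is x + y?'."""
    rng = random.Random(seed)
    return [(rng.randint(0, 99), rng.randint(0, 99)) for _ in range(n)]

def score(answer_fn, items):
    """Fraction of sampled items the subject answers correctly."""
    correct = sum(1 for (x, y) in items if answer_fn(x, y) == x + y)
    return correct / len(items)

# A subject who has merely memorized one practiced item ("3 + 3 = 6") ...
def memorizer(x, y):
    return 6 if (x, y) == (3, 3) else 0

# ... versus a subject who has actually mastered the underlying concept.
def master(x, y):
    return x + y

items = make_items(100)
print(score(memorizer, items))  # low: one memorized item doesn't generalize
print(score(master, items))     # perfect: mastery holds across all instances
```

The point of the sketch: a single practiced item tells you almost nothing, but scoring across many sampled instances separates memorization from mastery of the class.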

Sort of like the Turing Test ...
-- 
<J Q B>
