From newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwm.edu!cs.utexas.edu!sdd.hp.com!mips!darwin.sura.net!cs.ucf.edu!news Tue Jul 28 09:41:38 EDT 1992
Article 6476 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!utcsri!rutgers!uwm.edu!cs.utexas.edu!sdd.hp.com!mips!darwin.sura.net!cs.ucf.edu!news
From: clarke@acme.ucf.edu (Thomas Clarke)
Newsgroups: comp.ai.philosophy
Subject: Re: How do computers fare on scholastic achievement tests?
Message-ID: <1992Jul17.160210.28920@cs.ucf.edu>
Date: 17 Jul 92 16:02:10 GMT
References: <1992Jul16.093057.8880@techbook.com>
Sender: news@cs.ucf.edu (News system)
Organization: University of Central Florida
Lines: 20

In article <1992Jul16.093057.8880@techbook.com> szabo@techbook.com (Nick Szabo)  
writes:
> How about let's consider a practical measure of intelligence, eg the
> SAT tests?  
> 
> This raises some questions along these lines:
> 
> * Which questions would be easiest for a computer?  Which the most
>   difficult?

I suspect the commonsense problem of reading and following the
directions would be the most difficult part.  The SAT has only a small
number of question types; each type could be attacked and answered
fairly well by heuristic approaches like those used in chess programs.
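The per-type attack could be sketched as a dispatcher: first guess which
question type the program is facing, then hand it to a type-specific solver.
A minimal keyword-cue classifier for the dispatch step (the categories and
cue phrases below are purely illustrative assumptions, not the actual SAT
taxonomy):

```python
# A minimal sketch, assuming hypothetical question categories and
# surface-keyword cues; real heuristics would be far richer.

def classify_question(text):
    """Guess the question type from surface keywords; 'unknown' otherwise."""
    cues = {
        "analogy": ["is to", "::"],
        "antonym": ["most nearly opposite"],
        "quantitative": ["solve", "percent", "x ="],
        "reading": ["according to the passage"],
    }
    lowered = text.lower()
    for qtype, keywords in cues.items():
        # First category whose cue phrase appears wins.
        if any(k in lowered for k in keywords):
            return qtype
    return "unknown"
```

Of course, this kind of brittle pattern matching is exactly where the
commonsense problem of following directions bites: a slightly reworded
instruction falls through to "unknown".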

--
Thomas Clarke
Institute for Simulation and Training, University of Central FL
12424 Research Parkway, Suite 300, Orlando, FL 32826
(407)658-5030, FAX: (407)658-5059, clarke@acme.ucf.edu
