From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aisb!jeff Fri Jan 31 10:26:45 EST 1992
Article 3239 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!fauern!unido!mcsun!uknet!edcastle!aisb!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <1992Jan29.022411.11511@aisb.ed.ac.uk>
Date: 29 Jan 92 02:24:11 GMT
References: <11979@optima.cs.arizona.edu> <1992Jan28.220534.1523@gpu.utcs.utoronto.ca>
Sender: news@aisb.ed.ac.uk (Network News Administrator)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 17

In article <1992Jan28.220534.1523@gpu.utcs.utoronto.ca> pindor@gpu.utcs.utoronto.ca (Andrzej Pindor) writes:
>In article <11979@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>>     ....Therefore, _nothing_ you know can be more
>>certain than what you know by introspection alone.
>
>Although Mr. Zeleny objected to me calling this opinion solipsism, I believe
>that it is not very far from it. If the above is true, why bother to do 
>anything else but introspection? Why spend billions on scientific research
>instead of sitting in an armchair and introspecting? Have you seriously
>thought about logical consequences of this statement? 

This is surely too extreme.  That some things (eg, those known
by introspection) are more certain than things known any other way
does not imply that things known some other way are worthless.

It simply doesn't follow from what Gudeman said that there's no
point in doing anything else.
