From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!orourke Tue Jan 28 12:17:53 EST 1992
Article 3156 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!caen!garbo.ucc.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <42286@dime.cs.umass.edu>
Date: 26 Jan 92 15:13:34 GMT
References: <11819@optima.cs.arizona.edu> <42196@dime.cs.umass.edu> <1992Jan24.161425.5929@aisb.ed.ac.uk>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 49

In article <1992Jan24.161425.5929@aisb.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:

>Now, the idea that seeing a lot of questions successfully answered
>increases our confidence may be true 

That was my main point.  Agreement on the net is rare indeed, so I
should savor this.

>(though I don't know about increasing it _without bound_).  

This is less important, and less clear to me.  What I had in mind
is behavior similar to "Monte Carlo" or "randomized" algorithms,
where any prespecified level of accuracy can be reached by
running the algorithm long enough.  Drop Buffon's needle on parallel
lines often enough, and you can approximate pi as closely as
you wish.
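[A quick sketch of that needle-dropping estimate, for the curious.  The
function name and parameters are my own invention; it relies only on the
standard fact that a needle of length l <= d crosses one of the parallel
lines, spaced d apart, with probability 2l/(pi*d).]

```python
import math
import random

def buffon_pi(trials, needle=1.0, spacing=1.0, seed=0):
    """Estimate pi by simulated Buffon's-needle drops.

    A needle of length `needle` falls on lines `spacing` apart
    (needle <= spacing).  P(cross) = 2*needle / (pi*spacing),
    so pi is approximately 2*needle*trials / (spacing*hits).
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Distance from the needle's center to the nearest line.
        y = rng.uniform(0, spacing / 2)
        # Acute angle between the needle and the lines.
        theta = rng.uniform(0, math.pi / 2)
        # The needle crosses a line when its half-projection reaches it.
        if y <= (needle / 2) * math.sin(theta):
            hits += 1
    return 2 * needle * trials / (spacing * hits)
```

The standard error shrinks like 1/sqrt(trials), so any prespecified
accuracy is reachable by dropping more needles -- which is the sense in
which confidence grows with the number of trials.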

>But it's wrong to suppose that any old series of questions will do.  

I never believed otherwise, and I hope I didn't so imply.

>Teachers presumably try to
>ask the right sorts of questions, and they can get it wrong.

Right.  It takes deep understanding of the topic being probed, and
consummate skill at formulating appropriate questions.  But I believe
it is possible.

>Feynman's story about Brazilian physics is often used as an example.

Don't know it.  Care to elaborate?

>And if we consider the Turing Test applied to programs, the right
>question might be one about how the program works.  

This is where we part ways.  And where you agree with David Gudeman.
I was viewing the point of this Turing Test conversation as an attempt
to determine if understanding is present, independent of the
underlying mechanism.  So if, for example, you gain overwhelming
evidence that the machine you are querying understands (in the
sense of fully grasps meanings), and you believe on theoretical
grounds (along with Gudeman & Zeleny & Locke) that understanding is 
impossible without consciousness, then you should conclude that the
machine is very likely conscious.  
	This is perhaps my primary difference with those on the
other side of this debate:  I would be willing to draw that conclusion
(if I believed "understanding" ==> "consciousness"), whereas others 
would prefer to conclude that the machine, despite appearances, 
doesn't understand.


