From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!wupost!sdd.hp.com!caen!garbo.ucc.umass.edu!dime!orourke Tue Jan 28 12:16:32 EST 1992
Article 3058 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rutgers!usc!wupost!sdd.hp.com!caen!garbo.ucc.umass.edu!dime!orourke
From: orourke@unix1.cs.umass.edu (Joseph O'Rourke)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <42151@dime.cs.umass.edu>
Date: 23 Jan 92 15:47:56 GMT
References: <11774@optima.cs.arizona.edu>
Sender: news@dime.cs.umass.edu
Reply-To: orourke@sophia.smith.edu (Joseph O'Rourke)
Organization: Smith College, Northampton, MA, US
Lines: 8

In article <11774@optima.cs.arizona.edu> gudeman@cs.arizona.edu (David Gudeman) writes:
>..., I'm saying you have no reason
>at all to believe that a machine understands just because you can't
>stump it with hard questions.

I guess this is the essence of our difference:  to me this is very
good evidence for the hypothesis that the machine understands.
Perhaps I'm gullible.  What evidence would convince you?
