From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!asuvax!ncar!noao!arizona!gudeman Tue Jan 28 12:18:31 EST 1992
Article 3202 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!asuvax!ncar!noao!arizona!gudeman
>From: gudeman@cs.arizona.edu (David Gudeman)
Newsgroups: comp.ai.philosophy
Subject: Re: Intelligence Testing
Message-ID: <11974@optima.cs.arizona.edu>
Date: 28 Jan 92 06:56:36 GMT
Sender: news@cs.arizona.edu
Lines: 17

In article  <1992Jan26.143717.3591@csc.canterbury.ac.nz> The Technicolour Throw-up writes:
]>From article <11906@optima.cs.arizona.edu>, by gudeman@cs.arizona.edu (David Gudeman):
]> I don't have any problem believing that machine intelligence is
]> possible, I just don't think you can say that some behavior is a sign
]> of intelligence when you can completely explain the behavior without
]> refering to intelligence.  That sort of belief is completely
]> unmotivated.  (Or motivated by sloppy thinking.)
]
]I take it therefore that you believe in dualism?

Ontology is irrelevant to the issue.  The issue is whether questions
can constitute an adequate test of understanding given the hypothesis
that the questions were answered through purely syntactic means.
--
					David Gudeman
gudeman@cs.arizona.edu
noao!arizona!gudeman
