From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!batcomputer!cornell!rochester!yamauchi Tue Nov 26 12:31:55 EST 1991
Article 1541 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!batcomputer!cornell!rochester!yamauchi
From: yamauchi@cs.rochester.edu (Brian Yamauchi)
Subject: A Behaviorist Approach to AI Philosophy
Message-ID: <YAMAUCHI.91Nov24030039@magenta.cs.rochester.edu>
Sender: yamauchi@cs.rochester.edu (Brian Yamauchi)
Nntp-Posting-Host: magenta.cs.rochester.edu
Organization: University of Rochester
Date: 24 Nov 91 03:00:39


Most discussions of AI philosophy tend to float away into the
stratosphere, so I'd like to offer a more concrete philosophical
question and find out how members of the anti-AI faction would
respond...

Suppose AI researchers could build a robot that was indistinguishable
from a human being in every way -- sensorimotor behavior, language
abilities, learning and reasoning powers, even physical appearance.
Would you argue that this robot is incapable of consciousness simply
because it was the product of human engineering rather than mutation
and natural selection?

If so, why?

If not, then it seems that any proof of the impossibility of "strong
AI" (broadly defined) requires a convincing argument for why it is
impossible, even in theory, to build such an anthropomorphic robot.

Of course, it's impossible to build such a robot with current
technology, and it probably will be for quite some time.  So AI
advocates have the harder task: in order to prove such a robot is
possible, we have to actually build one... :-)
--
_______________________________________________________________________________

Brian Yamauchi				NASA/Caltech Jet Propulsion Laboratory
yamauchi@cs.rochester.edu		Robotic Intelligence Group
_______________________________________________________________________________
