From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Tue Jan 21 09:27:40 EST 1992
Article 2946 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Cargo Cult Science
Message-ID: <6030@skye.ed.ac.uk>
Date: 21 Jan 92 00:54:45 GMT
References: <92Jan15.081805est.14473@neat.cs.toronto.edu>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 15

In article <92Jan15.081805est.14473@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:
>It's clear from the discussion that advocates of Strong-AI, and
>computer scientists in general, don't have much understanding of
>empirical science. The essence of science is refutability. For any
>hypothesis to be taken seriously, it must be open to refutation. This
>means that the advocates must spell out exactly what evidence they
>would take as contradictory to the hypothesis. Well, what evidence
>would refute Strong-AI?

And we could ask: what evidence would refute the Turing Test?

Some people seem to think nothing would.  Table lookup?  Sure,
it could be conscious.  Well, maybe so, but do you think there's
any way we could ever tell that something with the right
behavior wasn't conscious?


