From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psgrain!percy!nosun!hilbert!max Tue Jan 28 12:15:31 EST 1992
Article 2992 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!news-server.csri.toronto.edu!rpi!usc!cs.utexas.edu!uunet!psgrain!percy!nosun!hilbert!max
From: max@hilbert.cyprs.rain.com (Max Webb)
Subject: Evidence that would falsify strong AI. (Re: Cargo Cult Science)
Message-ID: <1992Jan22.010051.6409@hilbert.cyprs.rain.com>
Summary: Here it is.
Organization: Cypress Semiconductor Northwest, Beaverton Oregon
References: <92Jan15.081805est.14473@neat.cs.toronto.edu>
Date: Wed, 22 Jan 1992 01:00:51 GMT

In article <92Jan15.081805est.14473@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:
>This means that the advocates must spell out exactly what evidence they
>would take as contradictory to the hypothesis. Well, what evidence
>would refute Strong-AI? 

Evidence that we can _universally_ solve the halting problem would
work. Evidence that we can solve some NP-complete problem optimally
in the general case would work (or at least convince me...).
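
(If you have forgotten why universal halting detection is such a tall
order, here is a minimal Python sketch of the standard diagonalization
argument. The "halts" oracle below is purely hypothetical; the whole
point is that no program can actually implement it.)

    # Hypothetical universal halting decider, assumed for contradiction.
    def halts(program, argument):
        """Pretend oracle: True iff program(argument) eventually halts."""
        raise NotImplementedError("no machine can implement this in general")

    def diagonal(program):
        """Halts exactly when program(program) does not halt."""
        if halts(program, program):
            while True:      # loop forever
                pass
        return               # halt immediately

    # diagonal(diagonal) halts iff halts(diagonal, diagonal) reports that
    # it does not; a contradiction, so no correct halts() can exist.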

	OR, if that is too hard...

Simply come up with a good characterization of what our capabilities
are, and prove that no Turing machine can emulate them. Such strong
formal results have been achieved, but only between two formal models.
You don't have one of the brain. Neither do I. But we _ARE_ getting
closer.
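
(For a sense of what such a separation between two formal models looks
like, take the textbook case: the pumping lemma shows that no finite
automaton can recognize the language a^n b^n, while any machine with
unbounded memory handles it trivially. A quick Python illustration of
the stronger model's side; the recognizer is just my own toy example.)

    def matches_anbn(s):
        """Recognize the language { a^n b^n : n >= 0 }.

        The pumping lemma shows no finite automaton can do this, since
        it would need unbounded memory to count the a's. A machine with
        a counter (or a tape) handles it trivially.
        """
        n = len(s)
        if n % 2 != 0:
            return False
        half = n // 2
        return s[:half] == "a" * half and s[half:] == "b" * half

    assert matches_anbn("")          # n = 0
    assert matches_anbn("aabb")      # n = 2
    assert not matches_anbn("aab")   # unbalanced
    assert not matches_anbn("abab")  # wrong order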

That satisfies the falsifiability criterion. Now quit bitching and
get to work.

> Yet, Strong-AI types keep
>making claims about what will be accomplished. At this point, their
>credibility is a bit thin. If you don't believe me, just ask DARPA.

The waveforms of simple neural nets are being _duplicated_. Lesion
behavior is being duplicated. Entire nervous systems of simpler
animals are being simulated down to the details of behavior. Unless
you think we are made of entirely different stuff, this strongly
suggests that we are simulable as well.
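
(To be concrete about what "duplicating the waveform" means in the
simplest possible case, here is a toy leaky integrate-and-fire neuron
in Python. It only illustrates the style of model; the parameter
values are invented, not taken from any of the groups doing this work.)

    def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=-65.0,
                     v_reset=-65.0, v_threshold=-50.0, resistance=10.0):
        """Leaky integrate-and-fire neuron: return a membrane voltage trace.

        dv/dt = (-(v - v_rest) + R * I) / tau, with a spike and reset
        whenever v crosses threshold. Parameters are illustrative only.
        """
        v = v_rest
        trace = []
        for current in input_current:
            dv = (-(v - v_rest) + resistance * current) * dt / tau
            v += dv
            if v >= v_threshold:
                trace.append(0.0)   # record a nominal spike peak
                v = v_reset         # reset after the spike
            else:
                trace.append(v)
        return trace

    # A constant drive over 100 time steps yields a regular spike train.
    voltages = simulate_lif([2.0] * 100)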

>because to him, empirical evidence is not an issue. It's the same with
>Strong-AI advocates: they are so obviously right that there is no
>evidence which could contradict them. They don't understand that in
>science, it's up to you to disprove the null hypothesis. 

See above.

> The real issues, like the one raised by Smith in
>"The Owl and the Electronic Encyclopedia" or by Lakoff in "Women, Fire
>and Dangerous Things" are never discussed. Lakoff actually uses
>empirical evidence to argue against strong AI.

Haven't read it, so why don't you summarize it here? I hope it is
better than Penrose's tripe. People get into trouble when they try
to circumscribe the potential of a field not their own.

>Marc Green
>Trent University

Regards,
	Max


