From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!jvnc.net!netnews.upenn.edu!libra.wistar.upenn.edu Tue Jan 28 12:16:13 EST 1992
Article 3037 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!usc!wupost!darwin.sura.net!jvnc.net!netnews.upenn.edu!libra.wistar.upenn.edu
From: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Newsgroups: comp.ai.philosophy
Subject: Re: Evidence that would falsify strong AI. (Re: Cargo Cult Science)
Message-ID: <63305@netnews.upenn.edu>
Date: 22 Jan 92 23:06:28 GMT
References: <92Jan15.081805est.14473@neat.cs.toronto.edu> <1992Jan22.010051.6409@hilbert.cyprs.rain.com>
Sender: news@netnews.upenn.edu
Reply-To: weemba@libra.wistar.upenn.edu (Matthew P Wiener)
Organization: The Wistar Institute of Anatomy and Biology
Lines: 33
Nntp-Posting-Host: libra.wistar.upenn.edu
In-reply-to: max@hilbert.cyprs.rain.com (Max Webb)

In article <1992Jan22.010051.6409@hilbert.cyprs.rain.com>, max@hilbert (Max Webb) writes:
>In article <92Jan15.081805est.14473@neat.cs.toronto.edu> mgreen@cs.toronto.edu (Marc Green) writes:
>>This means that the advocates must spell out exactly what evidence they
>>would take as contradictory to the hypothesis. Well, what evidence
>>would refute Strong-AI? 

>Evidence that we can _universally_ solve the halting problem would
>work.

That's a ridiculously strong requirement.  Where do you place things
like recognizing the consistency of PA (Peano arithmetic)?  This is an
instance of the halting problem, and humans seem to have done it.
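To make that concrete: Con(PA) has the logical shape of a non-halting
claim.  A program that enumerates all PA-proofs and stops on a proof of
0=1 halts iff PA is inconsistent; asserting Con(PA) is asserting that
this program never halts.  A rough sketch (mine, with a toy property
standing in for PA's proof enumeration, and a bound so it stays
runnable):

```python
def counterexample_search(property_holds, bound=None):
    """Halt (returning n) at the first n violating property_holds;
    run forever if the property holds of every n.  The optional
    bound caps the search so this sketch remains runnable."""
    n = 0
    while bound is None or n < bound:
        if not property_holds(n):
            return n        # halts: the property is false
        n += 1
    return None             # bound reached, no counterexample found

# A toy stand-in for a Pi_1 statement: "n*n >= n for every n".
# A human sees at once that this is true, i.e. that the unbounded
# search never halts -- which no finite amount of blind simulation
# could ever establish.
print(counterexample_search(lambda n: n * n >= n, bound=10_000))  # None
```

Recognizing Con(PA) is settling one such halting instance by insight
rather than by running the machine.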

>Simply come up with a good characterization of what our capabilities
>are, and prove that no turing machine can emulate them. Such strong
>formal results have been achieved, but only between two formal models.
>You don't have one of the brain. Neither do I. But we _ARE_ getting
>closer.

Again, you make it ridiculously strong.  One of our capabilities is
mathematics, and there are proofs regarding how much mathematics a
Turing machine can do.
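The classic such proof is Turing's diagonal argument: no machine
decides halting for all machines.  A minimal sketch (the decider
`halts` is hypothetical by construction -- the whole point is that no
correct one exists):

```python
def make_diagonal(halts):
    """Given a purported total halting decider halts(program) -> bool,
    build the program that defeats it (Turing's diagonal argument)."""
    def diagonal():
        if halts(diagonal):
            while True:     # the decider says we halt, so loop forever
                pass
        return              # the decider says we loop, so halt at once
    return diagonal

# Whatever answer a candidate decider gives on its own diagonal
# program, that answer is wrong.  E.g. a decider that claims
# "loops forever" is refuted by the program halting immediately:
d = make_diagonal(lambda program: False)
d()  # returns at once, contradicting the decider's verdict
```

This rules out a universal halting solver, but says nothing about the
far weaker capabilities the Strong-AI question actually turns on.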

>The waveforms of simple neural nets are being _duplicated_. Lesion
>behavior is being duplicated. Entire nervous systems of simpler
>animals are being simulated down to the details of behavior. Unless
>you think we are made of entirely different stuff, then this is strongly
>suggestive that we are simulable as well.

Not in the least.  The distance to go is far greater than the distance
gone.
-- 
-Matthew P Wiener (weemba@libra.wistar.upenn.edu)
