From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff Sun Dec  1 13:06:22 EST 1991
Article 1723 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!qt.cs.utexas.edu!cs.utexas.edu!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Arguments against Machine Intelligence
Message-ID: <5741@skye.ed.ac.uk>
Date: 28 Nov 91 18:40:32 GMT
References: <43772@mimsy.umd.edu> <288@tdatirv.UUCP>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 10

In article <288@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>This also is my main problem with the 'anti-AI' crowd.  I have yet to see
>any properly defined, verifiable definitions of some property possessed by
>neurons that is not, or cannot, be programmed into a digital computer.

Searle, at least, doesn't need to do that.

If Searle is correct, there must be some property of the brain
that is necessary for understanding and that is not just a matter of
instantiating the right program.  It's an existence proof, in effect.
