From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!Germany.EU.net!mcsun!uknet!edcastle!aiai!jeff Tue May 12 15:48:34 EDT 1992
Article 5358 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!cs.utexas.edu!qt.cs.utexas.edu!yale.edu!jvnc.net!darwin.sura.net!Sirius.dfn.de!math.fu-berlin.de!news.netmbx.de!Germany.EU.net!mcsun!uknet!edcastle!aiai!jeff
>From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy
Subject: Re: Systems Reply I (repost perhaps)
Keywords: AI Searle Dickhead Barf
Message-ID: <6638@skye.ed.ac.uk>
Date: 1 May 92 17:47:15 GMT
References: <1992Mar29.083336.6608@ccu.umanitoba.ca> <6589@skye.ed.ac.uk> <523@tdatirv.UUCP>
Sender: news@aiai.ed.ac.uk
Organization: AIAI, University of Edinburgh, Scotland
Lines: 46

In article <523@tdatirv.UUCP> sarima@tdatirv.UUCP (Stanley Friesen) writes:
>In article <6589@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
>|
>|No, but if we have good reasons to conclude that it can't
>|be duplicated by a computer, we still have these good reasons
>|even if we can't say how it's done in humans.

>So far no set of arguments against implementing
>a brain in digital circuits has been such as to preclude its applicability
>to humans as well.

You must have been reading different arguments than I have.

>|Unless you are willing to accept this point, there is no
>|point in continuing to discuss these matters with me,
>|because I am never going to agree that showing how it's
>|done in humans is necessary.
>
>It is because it is the only way of showing that the reasons you have advanced
>do *not* apply to humans.

I think you are confused.  Here's an example.

Suppose we have an argument like this:

   1. if P(x) then not can_do(x,task)
   2. P(computers)
   3. therefore not can_do(computers,task)

To show the argument does not apply to humans, it suffices to show
that P(humans) is false.

To show that P(humans) is false it is not necessary to show _how_
humans manage to do the task in question.

Moreover, if we can conclude

   can_do(humans,task)

we can reason thus:

   1. if P(x) then not can_do(x,task)
   2. can_do(humans,task)
   3. therefore not P(humans)
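
For what it's worth, that second inference (modus tollens) is mechanically
checkable.  A minimal sketch in Lean 4 -- the names P, can_do, and humans
are just placeholders for the schema above, not anything from the actual
debate:

    -- Premise 1: for any x, P x rules out doing the task.
    -- Premise 2: humans can do the task.
    -- Conclusion: not P humans.  (Negation is a function to False,
    -- so the proof applies premise 1 to a hypothetical P humans and
    -- refutes the result with premise 2.)
    example {A : Type} (P can_do : A → Prop) (humans : A)
        (h1 : ∀ x, P x → ¬ can_do x)
        (h2 : can_do humans) : ¬ P humans :=
      fun hp => h1 humans hp h2

Note that the proof nowhere needs a premise about _how_ humans do the
task; can_do humans is enough on its own.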

-- jd
