Xref: newshub.ccs.yorku.ca comp.ai.philosophy:1502 sci.philosophy.tech:1059
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!ccu.umanitoba.ca!access.usask.ca!alberta!kakwa.ucs.ualberta.ca!unixg.ubc.ca!ubc-cs!uw-beaver!micro-heart-of-gold.mit.edu!rutgers!ub!zaphod.mps.ohio-state.edu!wupost!uunet!mcsun!uknet!edcastle!aiai!jeff
From: jeff@aiai.ed.ac.uk (Jeff Dalton)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: Daniel Dennett
Message-ID: <5678@skye.ed.ac.uk>
Date: 22 Nov 91 19:39:09 GMT
References: <1991Nov18.083024.5560@husc3.harvard.edu> <15019@castle.ed.ac.uk> <1991Nov19.210047.5646@husc3.harvard.edu> <15112@castle.ed.ac.uk>
Reply-To: jeff@aiai.UUCP (Jeff Dalton)
Organization: AIAI, University of Edinburgh, Scotland
Lines: 74

In article <15112@castle.ed.ac.uk> cam@castle.ed.ac.uk (Chris Malcolm) writes:
>In article <1991Nov19.210047.5646@husc3.harvard.edu> zeleny@brauer.harvard.edu (Mikhail Zeleny) writes:

>On the way I have also managed to collect some idea of what you think
>is wrong with the ideas of these foolish AI supporters: it has
>something to do with an implicit assumption that Man is finite, based
>on some presumed relationship between Man and a Turing Machine.  Your
>answer is not entirely clear, ignores my illustration of the infinite
>capability of Turing Machines and seems to me to suggest, as before,
>that you assume that a Turing machine has finite capabilities.

Haven't you noticed that MZ keeps referring to FSAs (finite-state
automata)?  FSAs are, of course, finite.  TMs (Turing machines) are
not: their tape is unbounded.  But I think it's reasonable to say the
brain is finite, which would make a human-as-machine at most an FSA.
So humans are weaker than TMs, and hence if a TM can't do something
then a fortiori a human-as-machine can't either.  There may be some
problems involving different senses of "finite" and "infinite" to
untangle here, but I hope the general idea is more or less clear.
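
To make the capability gap concrete, here is a toy sketch (mine,
purely illustrative; the state bound of 1000 is arbitrary).
Recognising strings of the form a^n b^n requires counting without
bound.  An ordinary integer stands in for the TM's unbounded tape,
and a saturating counter stands in for any FSA's fixed supply of
states:

    def tm_style_recognise(s):
        # Unbounded counter stands in for the TM's unbounded tape:
        # accepts a^n b^n for any n whatsoever.
        count = 0
        i = 0
        while i < len(s) and s[i] == 'a':
            count += 1
            i += 1
        while i < len(s) and s[i] == 'b':
            count -= 1
            i += 1
        return i == len(s) and count == 0

    def fsa_style_recognise(s, state_bound=1000):
        # An FSA can only "count" as high as its fixed number of
        # states, so the counter saturates; past the bound, strings
        # that should be accepted are misjudged.
        count = 0
        i = 0
        while i < len(s) and s[i] == 'a':
            count = min(count + 1, state_bound)
            i += 1
        while i < len(s) and s[i] == 'b':
            count -= 1
            i += 1
        return i == len(s) and count == 0

    n = 5000                                # exceeds the state bound
    s = 'a' * n + 'b' * n
    print(tm_style_recognise(s))            # True
    print(fsa_style_recognise(s))           # False

The point is only the in-principle gap between fixed and unbounded
storage; whether that gap matters for brains is exactly what's at
issue.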

>>The implications of Searle's argument are painfully obvious: semantical
>>knowledge must be represented in, and accessible by, the mind of any
>>intelligent being.  Pray tell, where are these issues adequately addressed?
>
>I don't think anybody is yet capable of addressing them. They are
>generally recognised as serious issues in the AI community (which is
>precisely _why_ the Chinese Room gets anthologised and debated so
>much), and some people are working on them, despite being handicapped
>by intellectual dishonesty :-)

Unfortunately, many in the AI community don't stop at trying to
demonstrate that Searle has _failed to show_ that machines with
certain behavior could not "understand" (add the usual qualifications
about merely by running the right program, etc).  Instead, they go on
to argue that certain kinds of machines with the right behavior _would_
understand.  I don't think that, given the current state of knowledge,
etc, they can be in a position to reach that conclusion.

So far as I can tell, they do this in part out of a mistaken
"behaviorist" belief that if one plays by the right scientific 
rules there could never be any evidence that such a machine didn't
understand.

Examples such as David Gudeman's decision trees can be used as
"intuition pumps" (Dennett's term) to try to undermine that belief
by showing how it might matter "how it works".  (Can pumps be used
to undermine?  Let's suppose they can.)
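
For anyone who missed the earlier articles, the flavor of the example
is roughly as follows (a sketch of my own devising; Gudeman's actual
decision trees were more elaborate, and the replies here are
invented).  Over its finite domain the program's behavior may pass
muster, yet all it does is match the input against stored strings:

    # Conversation by table lookup, in the spirit of the decision
    # tree examples.  There is no parsing, no inference, and nothing
    # one would be tempted to call semantics: just a single probe
    # into a canned table.
    REPLIES = {
        "hello": "Hello.  What shall we talk about?",
        "do you understand chinese?": "Of course.  Why do you ask?",
        "what is 2 + 2?": "Four, last time I checked.",
    }

    def reply(utterance):
        return REPLIES.get(utterance.strip().lower(),
                           "Interesting.  Tell me more.")

    print(reply("Do you understand Chinese?"))

If discovering that a system worked this way would incline you to
withdraw the ascription of understanding, then "how it works" matters
after all.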

This puts the behaviorists in the position of saying either that we
can never find out how it works (an obscurantist position which ought
to be unattractive to them) or that it can't possibly matter (something
we're not yet in a position to know).

A common response is to attack the examples.  For instance, it
might be pointed out that the decision tree program couldn't
actually work well enough.  (Cf claims that the Chinese Room
couldn't have the desired behavior because it lacks temporary
storage.)  

That particular attack doesn't work very well, because the person
making it typically holds that, with the right program, computers
could work well enough, even though no one knows how.  If they can
demand that someone else prove an example would work well enough, why
can't the same demand be made of them?

Or it might be claimed that, for all we know, humans work in the
same way as an example that we're supposed to think would not count
as understanding.  But this is just an argument of the "Searle has
not proved his case" sort.  Maybe computers can understand because
maybe (ie, for all we know) humans work the same way (in all relevant
respects).  The argument does nothing to show that machines with
the right behavior would understand, only that -- for all we know --
they might.

-- jd