From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Tue Jan 28 12:16:32 EST 1992
Article 3057 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Jan23.153722.6392@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Jan23.015152.510@psych.toronto.edu>
Date: Thu, 23 Jan 1992 15:37:22 GMT
Lines: 37

michael@psych.toronto.edu (Michael Gemar) writes:
[...]
> It is this panpsychism which functionalism seems to imply which makes me
> *very* nervous.  I will agree that the above is not a *logical* argument
> against Strong AI, but it certainly should cause its advocates to pause and
> consider to what, at root, their position commits them (the ethical problems
> alone boggle the mind!).

Indeed it isn't a *logical* argument, and it ignores the role
*persistence* plays in describing what must in part be a processual
phenomenon.  I may not be entirely clear on what is meant by
functionalism here, but functional theories of language (the sort of
thing I'm more familiar with) are not limited to a limp description
of competence: they also describe performance and the relationship
between the two.  (I think language theory is relevant here, since I
assume language is an intelligent behaviour.)

Indeed, persistence of information over time in any system is the key
to describing its behaviour as coherent.  To take your example, a
roomful of air might indeed `momentarily' form a mindlike structure
(though the odds render this possibility remote), but that structure
has no persistence (by virtue of the inherent randomness of air under
normal conditions), and will not therefore be a `mind'.  Entropy 
triumphs again!

Any natural language program has to deal with this problem.  The
solution is, of course, memory.  
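The point can be made concrete with a toy sketch (the class names and
the key/value scheme are my own invention, not any particular NLP
system): an agent with no persistent store cannot connect one turn of
a dialogue to the next, while one with even a trivial memory can.

```python
# Toy illustration only: persistence of information is what lets a
# program's behaviour across dialogue turns count as coherent.

class MemorylessAgent:
    """Discards everything between turns -- no persistence."""
    def tell(self, key, value):
        pass  # nothing is stored

    def ask(self, key):
        return None  # every query starts from scratch


class RememberingAgent:
    """Keeps a persistent store of what it has been told."""
    def __init__(self):
        self.memory = {}  # state that survives across turns

    def tell(self, key, value):
        self.memory[key] = value

    def ask(self, key):
        return self.memory.get(key)


a = MemorylessAgent()
a.tell("speaker", "Michael")
print(a.ask("speaker"))    # None: no persistence, no coherence

b = RememberingAgent()
b.tell("speaker", "Michael")
print(b.ask("speaker"))    # Michael: memory ties the turns together
```

The difference between the two agents is exactly the difference
between the momentary roomful of air and a mind: only the second
structure persists long enough for its states to hang together.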

I may not be up on all notions of `strong AI', but any which don't
account for this sort of thing are surely untenable.

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce