From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill Wed Feb 26 12:54:37 EST 1992
Article 4019 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!aunro!ukma!asuvax!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill
From: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb25.202744.27815@organpipe.uug.arizona.edu>
Date: 25 Feb 92 20:27:44 GMT
References: <1992Feb24.175920.16996@psych.toronto.edu> <1992Feb25.105322.24546@norton.com>
Sender: news@organpipe.uug.arizona.edu
Reply-To: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Organization: Center for Neural Systems, Memory, and Aging
Lines: 44

In article <1992Feb25.105322.24546@norton.com> 
brian@norton.com (Brian Yoder) writes:
>
>This is nonsense. Arbitrary positions such as the one that rocks are intelligent
>should not be considered "possible"...they should be tossed out as meaningless.
>Of course I can't prove that there are not intelligent processes going on inside
>rocks, but then you can't expect me to prove negatives like that anyway. Where's
>your evidence that rocks have any intelligence?  Until you can come up with some,
>you have no business claiming that they might have some.
> 
  This is a misunderstanding.  Nobody is claiming that rocks are
intelligent.  The argument is that a certain definition of "intelligence"
that seems reasonable is actually not reasonable because it implies
that rocks (and everything else) are intelligent.

  I will briefly repeat the argument.  

  Proposed definition:  An object is "intelligent" if it implements
some sufficiently sophisticated set of programs.

  This raises at least two questions:  1) What is a program?  2) What
does it mean to "implement" one?

  If you accept the (modified) Church-Turing thesis (which most people do),
a program can be identified with a finite state automaton, so the first
question is no problem.
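As a concrete illustration of "program = FSA," here is a finite state
automaton reduced to a bare transition table.  This is my own sketch --
the parity machine and all names are invented for the example, not taken
from the thread:

```python
# A minimal sketch of an FSA as a transition table (illustrative only).

def make_fsa(transitions, start):
    """transitions: dict mapping (state, symbol) -> next state.
    Returns a function that runs the FSA on a sequence of symbols
    and reports the final state."""
    def run(symbols):
        state = start
        for s in symbols:
            state = transitions[(state, s)]
        return state
    return run

# A two-state parity automaton: tracks whether it has seen an even
# or odd number of 1s in its input.
parity = make_fsa(
    {("even", 0): "even", ("even", 1): "odd",
     ("odd", 0): "odd",  ("odd", 1): "even"},
    start="even",
)

print(parity([1, 1, 1]))  # prints "odd"
```

The whole "program," on this view, is just the transition table plus a
designated start state; nothing more is presupposed.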

  The obvious-seeming answer to the second question is that an
object implements a program (= FSA) if there is a mapping
from states of the object to states of the FSA such that
the state-transition rules of the FSA are respected by
the mapping.
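The definition above can be sketched directly in code.  Everything here
is an assumed toy example (the mod-4 counter, the toggle FSA, the parity
mapping are all mine): an object implements an FSA if some function from
object states to FSA states commutes with the two transition rules.

```python
# Sketch of "implementation as a structure-preserving mapping."
# All names are illustrative assumptions, not from the article.

def implements(obj_states, obj_step, fsa_step, mapping):
    """True iff mapping(obj_step(s)) == fsa_step(mapping(s))
    for every object state s -- i.e., stepping the object and then
    mapping agrees with mapping and then stepping the FSA."""
    return all(mapping[obj_step(s)] == fsa_step(mapping[s])
               for s in obj_states)

# Example: a counter mod 4 implements a two-state toggle FSA
# under the mapping that sends each count to its parity.
counter_states = [0, 1, 2, 3]
counter_step = lambda n: (n + 1) % 4
toggle_step = lambda q: "B" if q == "A" else "A"
parity_map = {0: "A", 1: "B", 2: "A", 3: "B"}

print(implements(counter_states, counter_step, toggle_step, parity_map))
# prints True
```

Note that nothing in the definition restricts what kind of function the
mapping may be -- and that unrestricted choice is exactly where the
trouble enters below.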

  What is a "mapping"?  There is no ambiguity here:  "mapping" just
means a function, in the mathematical sense.

  Now Putnam's argument, which I will not repeat, is that this
seemingly natural definition is bad, because with such a broad
notion of implementation it can be shown that every physical object
(such as a rock) implements every program, or at least an enormous
set of programs.  The conclusion is that some more restrictive
notion of implementation is needed.   
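To make the triviality vivid, here is a hedged sketch of the Putnam-style
move (the rock states and the FSA run are invented for illustration): if
the rock passes through distinct physical states over some interval, we
may simply *define* the mapping to send the rock's t-th state to whatever
state the FSA occupies at step t, and the transition rules are then
respected by fiat.

```python
# Putnam-style triviality sketch (all names hypothetical).  A "rock"
# occupies distinct physical states r0..r3 over four time steps; pick
# ANY inputless FSA run you like and map rock state t to FSA state t.
rock_states = ["r0", "r1", "r2", "r3"]
fsa_run = ["start", "think", "decide", "halt"]   # any program's run

mapping = dict(zip(rock_states, fsa_run))

# The mapping respects the transition rules by construction: each rock
# transition r_t -> r_{t+1} maps onto the FSA transition q_t -> q_{t+1}.
respects = all(mapping[rock_states[t + 1]] == fsa_run[t + 1]
               for t in range(len(rock_states) - 1))
print(respects)  # prints True
```

Since nothing about the rock constrained the choice of fsa_run, the same
construction works for every such program -- which is exactly why a more
restrictive notion of implementation is called for.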

	-- Bill


