From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!cs.utexas.edu!asuvax!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill Tue Feb 11 15:25:56 EST 1992
Article 3604 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!bonnie.concordia.ca!uunet!cs.utexas.edu!asuvax!ncar!noao!amethyst!organpipe.uug.arizona.edu!NSMA.AriZonA.EdU!bill
From: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Newsgroups: comp.ai.philosophy
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb10.000321.26668@organpipe.uug.arizona.edu>
Date: 10 Feb 92 00:03:21 GMT
References: <1992Feb3.113723.2519@arizona.edu> <1992Feb4.151115.5600@news.media.mit.edu> <1992Feb6.113740.2533@arizona.edu> <1992Feb8.033821.16351@news.media.mit.edu>
Sender: news@organpipe.uug.arizona.edu
Reply-To: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Distribution: world,local
Organization: Center for Neural Systems, Memory, and Aging
Lines: 48

Bill Skaggs:
>But what sorts of mappings are allowed?  If any arbitrary, time-
>dependent mapping is acceptable, then the dynamics of *any*
>object can be mapped to *any* Turing machine -- so every
>object is simultaneously performing every possible computation.
>
>Therefore, if we want to avoid rabid panpsychism, we must restrict
>the set of allowable mappings -- but I would claim that restricting
>the set of mappings amounts to grounding the system.
>
>P.S. I understand that Putnam has made essentially the same
>argument somewhere.

Marvin Minsky:
>Can you explain what you [Putnam] mean by a time-dependent mapping?
>If it means what I fear, there's no reason to call it a mapping.  On
>the other hand, non-time-dependent state-automata naturally fall into
>equivalence classes under isomorphism, which you could regard as
>"grounding" if you want. I presume that Putnam's time dependent
>mappings are determined by another automaton?  He must have something
>to keep all the successive states from being indistinguishable?

Aargh.  This is a nightmare of miscommunication -- largely my
fault.

First of all, I have been informed (by Mikhail Zeleny) that
Putnam's argument (which can be found in his book "Representation
and Reality") refers to FSA's rather than Turing machines, and
exploits the fact that no real object is ever in the same
state twice to remove the need for time-dependence of the 
mapping.
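To make the construction concrete, here is a minimal sketch (my
own illustration in Python -- the state names are invented, and
this is my reading of the argument, not Putnam's own formulation):

    # If a physical system never revisits a state, then ANY run of
    # ANY finite-state automaton can be "realized" by it under a
    # static (time-independent) mapping: just pair states by step.
    def trivial_realization(physical_trajectory, fsa_run):
        # Distinctness of the physical states makes the pairing a
        # well-defined function of the state alone -- no time index
        # is needed anywhere.
        assert len(set(physical_trajectory)) == len(physical_trajectory)
        assert len(physical_trajectory) == len(fsa_run)
        return dict(zip(physical_trajectory, fsa_run))

    # Example: a rock's (hypothetical) microstates r0..r3 "realize"
    # a two-state flip-flop's run A, B, A, B.
    mapping = trivial_realization(["r0", "r1", "r2", "r3"],
                                  ["A", "B", "A", "B"])
    print(mapping)  # {'r0': 'A', 'r1': 'B', 'r2': 'A', 'r3': 'B'}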

But this is not really essential to the point I'm trying to
make.  Let me try again:

You have said that consciousness (thinking, intelligence,
whatever) is a property of programs.  I more or less agree,
but the problem is that we want to attribute consciousness,
or lack of it, to *things* such as organisms or computers.
We say, then, that a thing is conscious (or whatever) if
it is implementing the right sorts of programs.  But
what does this actually *mean*?
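To pin the question down, here is one candidate reading (again my
own sketch, with invented names): a mapping f "implements" an
automaton on a trajectory if mapping a physical state and then
stepping the automaton agrees with letting the physics run one
step and then mapping.

    # Candidate definition: f commutes with the dynamics.
    def implements(trajectory, step, f):
        return all(step(f(s)) == f(t)
                   for s, t in zip(trajectory, trajectory[1:]))

    # Flip-flop automaton A <-> B, and a Putnam-style pairing map:
    step = {"A": "B", "B": "A"}.get
    f = {"r0": "A", "r1": "B", "r2": "A", "r3": "B"}.get
    print(implements(["r0", "r1", "r2", "r3"], step, f))  # True

    # The trouble: a map built this way passes the test for ANY
    # trajectory of distinct states and ANY automaton -- which is
    # the panpsychism worry, unless the mappings are restricted.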

What I have been trying to do (rather ineptly) in this
thread is to provoke you into explaining what it means
to you for a thing to implement a program.

	-- Bill


