From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!bronze!chalmers Mon Mar  9 18:33:39 EST 1992
Article 4118 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.ecf!utgpu!csd.unb.ca!morgan.ucs.mun.ca!nstn.ns.ca!news.cs.indiana.edu!bronze!chalmers
From: chalmers@bronze.ucs.indiana.edu (David Chalmers)
Subject: Re: Strong AI and panpsychism
Message-ID: <1992Feb28.081704.20687@bronze.ucs.indiana.edu>
Organization: Indiana University
References: <1992Feb17.221311.17990@oracorp.com>
Date: Fri, 28 Feb 92 08:17:04 GMT
Lines: 51

In article <1992Feb17.221311.17990@oracorp.com> daryl@oracorp.com writes:

>You have missed the possibility that "causal organization" might not
>be an objective property.  [Analogy with entropy deleted.]

All I can say to this is that if causal organization is not an
objective property, then it can't be part of a theory about the
physical basis of consciousness, because there is certainly
a fact of the matter about whether a system is conscious or not
(if there weren't, i.e. if consciousness were just a matter of
interpretation, I wouldn't care enough about it to be building
theories).  Personally, I don't doubt that there's an objective
notion of causal organization.  (There are probably many, and
there are probably a few subjective notions as well.  It's
choosing the right one that's the tricky part.)

>Your complaint about clocks, that they don't support counterfactuals,
>is, I think, easily corrected: for example, consider a machine M with a
>state determined by a pair: the time, and the list of all inputs ever
>made (with the times they were made). If "implementation" simply means
>the existence of a mapping from the physical system to the FSA, then
>it seems that such a system M would simultaneously implement *every*
>FSA. Counterfactuals would be covered, too.
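
To fix ideas, here's a rough sketch of such a machine M in Python
(the code and all the names in it are mine, not yours; this is just
one way of cashing out the construction):

    # Machine M: its state is the pair (time, list of all inputs so
    # far, with the times they were made).  No state ever recurs, so
    # any FSA run on the same inputs induces a well-defined mapping
    # from M's states onto the FSA's states.
    class M:
        def __init__(self):
            self.t = 0
            self.history = ()               # (time, input) pairs

        def step(self, inp):
            self.history += ((self.t, inp),)
            self.t += 1
            return (self.t, self.history)   # M's current state

    def fsa_trace(delta, q0, inputs):
        # The state sequence of an FSA with transition table delta.
        q, trace = q0, []
        for i in inputs:
            q = delta[(q, i)]
            trace.append(q)
        return trace

    # Any FSA whatsoever "maps onto" M's state sequence:
    delta = {('A',0):'A', ('A',1):'B', ('B',0):'B', ('B',1):'A'}
    inputs = [1, 0, 1]
    m = M()
    m_states = [m.step(i) for i in inputs]
    mapping = dict(zip(m_states, fsa_trace(delta, 'A', inputs)))

Since M's states are pairwise distinct, the map is trivially a
function, which is exactly what makes the "implementation" so cheap.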

Your machine M is an interesting example, which also came up in an
e-mail discussion recently.  One trouble with the way you've phrased
it is that it doesn't support outputs (our FSAs have outputs as well
as inputs, potentially throughout their operation); but this can be
fixed by the usual "humongous lookup table" method.  So what entitles
us to say that a humongous lookup table doesn't implement the FSAs
to which it's I/O-equivalent?  (You can think of the table as the
"unrolled" FSA, with new branches being created for each input.  To
map FSA states to (big disjunctions of) table states, simply take
the image of each FSA state under the unrolling process.)
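
For concreteness, here's one way the unrolling might go (again a
sketch of my own, assuming a Moore-style FSA whose output depends
only on its current state; nothing hangs on that choice):

    from collections import defaultdict

    def unroll(delta, out, q0, alphabet, depth):
        # Unroll an FSA into a lookup table keyed by input history.
        # Returns the table (input history -> output) and, for each
        # FSA state, the set of table entries that are its images
        # under the unrolling: the "big disjunction" of table states.
        table, image = {}, defaultdict(set)
        frontier = [((), q0)]
        for _ in range(depth + 1):
            next_frontier = []
            for hist, q in frontier:
                table[hist] = out[q]
                image[q].add(hist)
                for a in alphabet:
                    next_frontier.append((hist + (a,), delta[(q, a)]))
            frontier = next_frontier
        return table, image

Mapping each FSA state to its image set is what makes the table look
like an implementation.
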
This is a tricky question.  Perhaps the best answer is that the table
really doesn't have the right state-transitional structure, as it can
be in a given state without producing the right output and transiting
into the appropriate next state, namely when it's at the end of the
table.  Of course this won't work for the implementation of halting
FSAs (i.e. ones that must halt eventually, for any inputs), but one
could argue that the FSA which describes a human at a given time
isn't a halting FSA (the human itself might halt, but that's because
of extraneous influences on the FSA).  Your example above doesn't
have the problem at the end of the table; it just goes on building up
its inputs forever, but at the cost of the ability to produce the
right outputs.
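
Continuing the unroll sketch above, the failure mode is easy to
exhibit: past the table's depth there is simply no entry, so the
table can neither produce the right output nor transit to the right
next state, while the FSA itself carries on.

    delta = {('A',0):'A', ('A',1):'B', ('B',0):'B', ('B',1):'A'}
    out = {'A': 'lo', 'B': 'hi'}
    table, image = unroll(delta, out, 'A', alphabet=[0, 1], depth=3)

    hist = (1, 0, 1, 1)        # four inputs, but the depth is only 3
    print(hist in table)       # False: the table has run out
    q = 'A'
    for a in hist:             # the FSA, of course, is unbothered
        q = delta[(q, a)]
    print(out[q])              # 'hi'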

-- 
Dave Chalmers                            (dave@cogsci.indiana.edu)      
Center for Research on Concepts and Cognition, Indiana University.
"It is not the least charm of a theory that it is refutable."
