From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle Wed Feb  5 11:56:32 EST 1992
Article 3433 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!utgpu!watserv1!watdragon!logos.waterloo.edu!cpshelle
From: cpshelle@logos.waterloo.edu (cameron shelley)
Subject: Re: Strong AI and panpsychism (was Re: Virtual Person?)
Message-ID: <1992Feb3.160554.722@watdragon.waterloo.edu>
Sender: news@watdragon.waterloo.edu (USENET News System)
Organization: Evil Designs Inc.
References: <1992Jan31.233453.7625@news.media.mit.edu>
Date: Mon, 3 Feb 1992 16:05:54 GMT
Lines: 100

minsky@media.mit.edu (Marvin Minsky) writes:
[...]
> I completely agree with Michael Gemar about this.  I, too, have been
> sorely puzzled why so many people insist that being situated in the
> world -- or having some kind of "grounding" for "meaning" -- has
> anything to do with consciousness or semantics or such things.  No
> matter that many distinguished philosophers have said so.  We can
> manufacture unbelievably interesting "virtual" worlds.

I think there are a number of fallacies here, relative to this 
thread.

First of all, I think you are placing me in a particular school
of philosophy regarding the `symbol-grounding' problem, and then
engaging in a strawman attack.  I believe my statements about
non-locality in this thread have not indicated that I support
a law-abiding view of relations between symbols and `ground'.
Rather, I think a functional description of consciousness is
valid, but meaningless without constraints, as supplied by *an*
environment.  Any environment sufficiently constrained relative
to the level of functional description would do, I'm sure.
Unfortunately for me, I don't have a precise definition of the
`sufficiency' to offer, just examples (eg, evolution by natural
selection).

Michael's statements about consciousness, to date and to my
understanding, have rested on the assertion that any collection of
material may, even transitionally, attain a conscious functional
structure with non-zero probability.  Because this constraint is
vanishingly weak, the number of permitted permutations is
unboundedly large, which leads to the conclusion that everything is
conscious.  If the notion of consciousness supposed is binary (eg,
on/off), as Michael seems to have it, then this reduces to nonsense.
Basing a theory simply on a non-zero probability reduces the
information it yields to merely non-zero as well.  I don't find that
very constructive.  My assertion is that if we instead do not ignore
entropy and non-locality, we get a much more powerful result, at least
in terms of predictive power.  I believe this is what a scientific
model should do.

The association apparently suggested by Minsky above between 
consciousness and semantics is also one I have not claimed, but I
think that's part of the strawman being refuted there.  Semantics
is more likely a set of internal constraints conditioned by
situatedness both locally and non-locally, nature and nurture if
you prefer.  As I've repeated before in this thread, I am not
suggesting that consciousness exists purely by external action,
rather by *both* subjective and objective criteria.

I have also not appealed to the authority of "many distinguished
philosophers", but I think this is also part of the strawman argument. 

> What's more, I don't even see why those formal systems even need to be
> run on real computers, if they are specified complete with their
> environments.  Those virtual beings, just as "conscious" as me and
> (presumably) you, can lead arbitrarily rich, imaginative lives, or
> whatever.

I'm not sure what you're saying here, but if you're equating `program'
with `process', then I must disagree.

> When you close your eyes and try to prove Fermat's last Theorem
> your internal actions rapidly become less and less "situated" in the
> room you're in.  Yet still you think.  Because of memory.  And it
> would not matter if those memories came from some "genuine, early,
> experience" or if someone just inserted a new ROM in your brain.

I think this relies on an extremely narrow view of `situatedness'.  As I
have indicated before in this thread, memory is part of non-locality
because time is inextricable from situation.  Anyway, a constraint-
based theory would allow fluctuations in `situatedness', provided it
is given the information-rich basis that I mentioned above.  And as I
also remarked previously, the origin of memories is immaterial, since
it is the progression from present to future upon which the
constraints are based. 

> Furthermore, I don't see any reason why anyone has even to write down
> those programs in the first place, so long as they are among those
> that would be generated by, say, an exhaustive -program-enumeration
> program.  The "situated" dogma is one thing, but it seems to me that
> even the "X exists" predicate is redundant for such discussions.  The
> important thing is, what machinery could produce what sorts of minds,
> under what circumstances -- and any connection with some "real" world
> seems quite arbitrary.

Then what are `circumstances'?  How does this machine `produce'?  What
is a `machine', in this case?  Your statements about the redundancy of
ontology seem riddled with ontological terms.  If I read your text
aright, it sounds like your view of consciousness is worse than
Michael's: not only is consciousness monadic, it is essentially static.
Similar competence-only theories of language have proven insufficient
to describe observable phenomena, and I suspect such theories of
consciousness will suffer the same fate. 

				Cam
--
      Cameron Shelley        | "Syllogism, n.  A logical formula consisting
cpshelle@logos.waterloo.edu  |  of a major and a minor assumption and an
    Davis Centre Rm 2136     |  inconsequent."
 Phone (519) 885-1211 x3390  |				Ambrose Bierce