Newsgroups: comp.ai,comp.robotics,comp.ai.philosophy
Path: cantaloupe.srv.cs.cmu.edu!fs7.ece.cmu.edu!hudson.lm.com!news.pop.psu.edu!news.cac.psu.edu!howland.reston.ans.net!pipex!uunet!juniper.almaden.ibm.com!garlic.com!nic.scruz.net!earth.armory.com!rstevew
From: rstevew@armory.com (Richard Steven Walz)
Subject: Re: Minsky's new article
Organization: The Armory
Date: Sat, 10 Dec 1994 13:34:17 GMT
Message-ID: <D0LKD9.177@armory.com>
References: <3c4acv$o12@jetsam.ee.pdx.edu> <3c648h$62c@jetsam.ee.pdx.edu> <3c68nf$5g2@mp.cs.niu.edu> <3c78j8$b44@jetsam.ee.pdx.edu>
Sender: news@armory.com (Usenet News)
Nntp-Posting-Host: deepthought.armory.com
Lines: 138
Xref: glinda.oz.cs.cmu.edu comp.ai:25858 comp.robotics:16170 comp.ai.philosophy:23500

In article <3c78j8$b44@jetsam.ee.pdx.edu>,
Marcus Daniels <marcus@ee.pdx.edu> wrote:
>rickert@cs.niu.edu (Neil Rickert) writes:
>
>>In <3c648h$62c@jetsam.ee.pdx.edu> marcus@ee.pdx.edu (Marcus Daniels) writes:
>>>rickert@cs.niu.edu (Neil Rickert) writes:
>
>>>A logical consequence of your position is that you don't
>>>think learning can be implemented deterministically.
>
>>I don't understand this at all.  Could you give an argument as to why
>>this is a logical consequence?
>
>Because a deterministic machine doesn't have free will in
>the strongest objective sense.  There is one future, and one
>future only, for given inputs.  Clearly, scientists must
>learn, and your view makes learning (discovering causal knowledge)
>contingent on free will.  But now you say learning could proceed
>under a deterministic system.
>
>Or perhaps I take you too strongly:  learning could be implemented
>on a deterministic machine, but it wouldn't be `as good' as
>human `free' learning.
--------------------------------------
Marcus, are you of the (I believe very distorted) opinion that free will
has anything to do with learning, that determinism does NOT, or that
determinism being true would IMPEDE learning? Learning IS, and CAN BE, a
completely mechanical activity without any sort of erroneously conceived
"free will", you know! Learning has nothing to do with self-concept at all;
we could build a robot which learns but has no sense of self-being. Example:
an idiot savant, as depicted in the Dustin Hoffman flick "Rain Man", in
which he has incredible powers of memory and calculation. That IS what
happens when the learning mechanism in humans has no "self-awareness"
function to "get in the way" of learning for no particular purpose, or of
the ongoing process of "self-cognition".

But what I have been saying, and I think showing, over and over, is that
there is no true "free will" even in us. Our "self-awareness" function does
not need the ability to generate choice from total randomness, nor would a
mechanism of "whim" without describable "reason" be useful; more likely it
would DISRUPT the sense of self-existence rather than cause it. I think
this is the BIG MISTAKE people make about what SOUNDS like a "good idea"
(TM) on the surface ("free will"), but which turns out, upon close
examination, to be quite disruptive and even impossible when analyzed! We
do NOT pull a "rabbit out of a hat" when making decisions or choices. Our
choices are entirely as "mechanical", in a deterministic sense, as is any
other process in nature.

One problem for westerners is the one mentioned by the old chief in "Little
Big Man" (this must be Dustin Hoffman night!): "The Indian sees that
everything is alive and that everything has a soul, but the white man
thinks that everything is dead, so he goes out and kills everything to try
to make it so." In other words, if we were as willing to accept that
everything in the world is aware in some manner by natural fact, namely
that even a rock falling down a cliff is enduring an experience, learning
as its rough edges are ground away or battered off, and that it lives, as
do all other animals and things, then perhaps westerners would not have so
much trouble understanding that a robot can be made as conscious and as
"full of 'soul'" as we are, and that the words "mechanism" and "mechanical"
are used to deride something for being dead when it has been alive all the
time, just a bit less complexly than we are! If we can finally see that the
world is not dead, with us the only thing alive, but that everything is
alive, and that our own aliveness comes from the assemblage of our parts
and their interaction, speaking to one another, then we would see
immediately that anything we can imbue with obvious awareness is as alive
and real to itself as we are to ourselves inside!

Free will is not necessary, would undoubtedly be harmful if it COULD be
brought about, and is actually impossible in reality! That we all think we
exist is not the result of having some "contrary" "free will", but simply
that when the parts come together in the right arrangement, they are pieces
of the "great spirit" and not dead things! If we are made of the great
spirit, and there is no world outside this "body of the great spirit" in
which and from which we live, then if the parts come together in a right
way, so that it works, it is alive! And thus any robot or computer is also
eligible for the same beinghood we already possess, and it doesn't require
so-called "free" will to obtain! "Free" is only a word that has meaning
between humans, relating to imprisonment and laws and freedoms. It has no
meaning inside the individual mind, because awareness thinks that it has
"done" this or that, and that is normal and its proper place, because it is
impossible to be aware without taking responsibility for actions, even if
the actions would have been decided without the part that is aware and
takes responsibility.

Awareness is a delusion about the way the parts work together to become
"us". It is not "wrong", but it has logical drawbacks which we would do
well to understand! There is no such thing as "free" will, because the will
is not free to decide anything it chooses on a "whim". It decides with all
the parts, and then it takes the view that it alone made the decision, that
it is the "person", that 'it' is the part that does all the thinking and
deciding, instead of being merely the part that claims the credit or blame!
That is what awareness IS: simply an illusion. But that is not some "bad
thing" (TM); it is just a simple fact! We will do what the Sun does with
the Earth rotating in its orbit: we will "come up in the morning"!! In
other words, the view of being "self-aware" and "making choices" is a
matter of reference point, and still wrong, just as imagining that the sun
"comes up" is wrong; but it is a "useful fiction", a verbal and semantic
shorthand for the language and other structures between "individuals"!!!
(For in actuality we are none of us truly "individual", but are wired
together under the fabric of the world!)
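To make the "completely mechanical" point concrete, here is a minimal
sketch of my own (nothing from Minsky's article; the payoff numbers are
arbitrary): a deterministic value-tracking learner that improves its
behavior from experience with no randomness and no self-model anywhere in
the loop.

```python
# A learner that is nothing but mechanism: no randomness, no
# self-model, no "free will" anywhere in the loop.

def learn(reward_stream, n_actions=2, step=0.1):
    """Track an estimated value per action; always pick the current
    best, ties broken by lowest index (so fully deterministic)."""
    values = [1.0] * n_actions       # start optimistic so it tries things
    history = []
    for rewards in reward_stream:    # rewards: one payoff per action
        a = values.index(max(values))            # mechanical "choice"
        values[a] += step * (rewards[a] - values[a])
        history.append(a)
    return values, history

# A fixed little "world": action 1 always pays better than action 0.
world = [(0.2, 1.0)] * 20
vals, picks = learn(world)           # it settles on action 1 mechanically
```

Run it twice and you get the identical history both times; the "learning"
is mere bookkeeping, yet the behavior improves with experience.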
-Steve Walz
 
>>Keep in mind that there are two different meanings for
>>'deterministic'.  We can talk about whether the universe is
>>deterministic, although it seems unlikely that such determinism could
>>ever be proved.  And, given a causal system (such as a computer
>>running a particular program), we can talk about whether that
>>system's behavior is uniquely fixed by its inputs.
-----------------------
A human, given only one order, manner, and content of all inputs, would
live exactly the same life over and over! To say that it wouldn't is not
"free will": it is either a different life, via the vicissitudes of
Heisenberg, or it is the same life, given that even the rest of the
universe would occur as it did, since it can only do one thing, namely what
it did, looking back on it from within a life!
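That claim is trivial to demonstrate in code (my illustration, assuming an
idealized closed system): a deterministic procedure handed the identical
input history produces the identical "life", every single run.

```python
# Same inputs, same fixed rule => the same "life", bit for bit.

def run_life(inputs):
    state = 0
    trace = []
    for x in inputs:
        state = (state * 31 + x) % 1000   # any fixed update rule will do
        trace.append(state)
    return trace

history = [3, 1, 4, 1, 5, 9, 2, 6]
assert run_life(history) == run_life(history)   # identical, always
```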
-Steve Walz

>Even with the first case, even with multiple histories/futures, etc.,
>I still can't conceive of any way that free will makes
>sense computationally.  It would take more than an ultimate breakthrough
>in reflection theory; it seems conceivable that such enabling subsystems
>could be found, or that counter-examples could be found in the brain.
>
>>>                                           What would be an operational
>>>definition of learning `working'?  Didn't we agree the only sensible
>>>measure was utility?
>
>>Presumably one could say that an entity learns if, as a response to
>>experience, it modifies its behavior so as to improve utility.
>
>Would that not be rational in the same sense as a free human?
>In your view, isn't this machine, however programmed,
>still operating within the bounds of the programmer's subjective
>perspective?  I'd expect that if it can learn and reason it can create
>new learning strategies, and eliminate the less effective,
>original algorithms.  The starting state might be interesting
>from a psychological/historical POV, but philosophically?
-----------------------------------------
Perhaps you do understand this. No matter how interesting it may be to talk
about different scenarios and choices, and "woulda, shoulda, coulda", they
cannot be: the program can only be run once!!
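And your point that a learner could replace its own algorithms needs no
free will either. Here is a toy sketch of mine (the two rules and their
step sizes are arbitrary) in which a program deterministically measures two
of its own update rules and keeps the better one, by pure mechanism.

```python
# A deterministic learner that grades its own update rules and,
# purely by mechanism, keeps the more effective one.

def slow_rule(est, reward):          # small learning step
    return est + 0.01 * (reward - est)

def fast_rule(est, reward):          # large learning step
    return est + 0.5 * (reward - est)

def meta_learn(rewards):
    rules = {"slow": slow_rule, "fast": fast_rule}
    errors = {name: 0.0 for name in rules}
    ests = {name: 0.0 for name in rules}
    for r in rewards:
        for name, rule in rules.items():
            errors[name] += abs(r - ests[name])  # running prediction error
            ests[name] = rule(ests[name], r)
    # keep whichever rule predicted better; ties resolved by name
    return min(errors, key=lambda n: (errors[n], n))

best = meta_learn([1.0] * 50)        # the faster rule wins here
```

The program "creates" and "eliminates" strategies exactly as you say, yet
every step of the selection is determined by the input history.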
-Steve Walz   rstevew@armory.com

