From newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!darwin.sura.net!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert Mon Aug 24 15:41:28 EDT 1992
Article 6666 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!wupost!darwin.sura.net!zaphod.mps.ohio-state.edu!pacific.mps.ohio-state.edu!linac!mp.cs.niu.edu!rickert
From: rickert@mp.cs.niu.edu (Neil Rickert)
Subject: Re: Freewill, chaos and digital systems
Message-ID: <1992Aug20.183052.5388@mp.cs.niu.edu>
Organization: Northern Illinois University
References: <Bt4xt1.MA0.1@cs.cmu.edu> <1992Aug19.210204.29868@mp.cs.niu.edu> <Bt9Kq2.CLy.1@cs.cmu.edu>
Date: Thu, 20 Aug 1992 18:30:52 GMT
Lines: 55

In article <Bt9Kq2.CLy.1@cs.cmu.edu> rudis+@cs.cmu.edu (Rujith S DeSilva) writes:
>I prefer a more operational definition.  Let me suggest the following
>different flavours of freewill.  I assume that the brain/mind/soul obeys the
>`usual' physical laws.

>(a) All your `decisions' can be predicted in advance ...
>(b) All your decisions can be predicted .. [but not in advance]
>(c) Your decisions are pre-determined, but they are not predictable. ...

>(d) Your decisions are not pre-determined.  I think current quantum theory
>(specifically, the absence of `hidden variables') allows this possibility, but
>would like an expert opinion.
>
>Have I left out any possibilities?  What do you believe is the true situation?
>Personally, I don't have a clue, but I would not like it to be (a).  Actually,
>(d) is not too bad.

From the point of view of a philosophical discussion, your (a), (b),
and (c) seem essentially equivalent.  They all specify determinism, and
differ only in the computational feasibility of computing future behavior.

The argument I gave in <1992Aug19.210204.29868@mp.cs.niu.edu> shows that
determinism allows "free will" as a reasonable interpretation.  Presumably
you want something more absolute than a reasonable interpretation.  I
doubt that you will achieve it.  There is no support for free will in
quantum randomness.  If your behavior depends significantly on randomness,
then you have no free will.  Random behavior is certainly not my idea
of the exercise of free will.

>                     It could be that most of our decisions are based on our
>nature/nurture, with only the borderline cases being resolved randomly.

I have no problem with this.  It is consistent with my argument.
Those decisions for which I have a reason are made deterministically,
depending on the physical content of my brain that gives me those
reasons.  Those decisions for which I have no reason I consciously
think of as made randomly (though I might use the term "arbitrarily"),
so this is quite consistent with some random natural background
information being involved.  I really don't see that it makes any
great difference whether the random information is "truly random" or
merely pseudo-random and in theory predictable.
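The pseudo-random case can be illustrated with a toy sketch (in Python,
purely as an illustration; the "decision maker" here is a hypothetical
stand-in, not anyone's model of a brain).  A pseudo-random sequence is
fully determined by its seed, so it is predictable in principle by
anyone who knows the seed, yet it looks arbitrary to an observer who
does not:

```python
import random

def arbitrary_choices(seed, n):
    """Make n yes/no 'decisions' that look arbitrary but are
    fully determined by the seed."""
    rng = random.Random(seed)  # deterministic given the seed
    return [rng.choice(["yes", "no"]) for _ in range(n)]

# The same seed always yields the same "arbitrary" decisions, so a
# predictor who knows the seed can foresee every one of them.
run1 = arbitrary_choices(42, 5)
run2 = arbitrary_choices(42, 5)
assert run1 == run2
print(run1)
```

From the outside, nothing in the output distinguishes this predictable
source from a "truly random" one, which is the sense in which the
difference makes no difference to the argument.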

>In this case, as a previous poster pointed out, are we morally responsible for
>our actions?

I prefer to look at moral responsibility from a pragmatic viewpoint.
Moral responsibility is an assumption of our culture, and of most
individuals in our culture.  It is written into many books (and Usenet
news articles).  As such, the assumption of moral responsibility becomes
part of the information that determines future behavior.  To put it
bluntly, if you could wave a magic wand and expunge all traces of the
doctrine of moral responsibility, you would change all future human
behavior, and you would probably have expunged civilization in that
same act of magic.



