Newsgroups: alt.philosophy.objectivism,alt.sci.physics.new-theories,sci.logic,comp.ai,comp.ai.philosophy,sci.philosophy.meta,alt.memetics
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!newsfeed.pitt.edu!gatech!newsfeed.internetmci.com!in2.uu.net!utcsri!utgpu!pindor
From: pindor@gpu.utcc.utoronto.ca (Andrzej Pindor)
Subject: Re: Open Letter to Professor Penrose
Message-ID: <DLCBKn.FAn@gpu.utcc.utoronto.ca>
Organization: UTCC Public Access
References: <4d0vqs$n8p@hpindda.cup.hp.com> <4dgpl0$im0@news.ox.ac.uk> <4dh0du$frve@hopi.gate.net> <4dhgfb$6dh@lastactionhero.rs.itd.umich.edu>
Distribution: inet
Date: Wed, 17 Jan 1996 19:31:34 GMT
Lines: 69
Xref: glinda.oz.cs.cmu.edu sci.logic:16628 comp.ai:36029 comp.ai.philosophy:36734 sci.philosophy.meta:23253

In article <4dhgfb$6dh@lastactionhero.rs.itd.umich.edu>,
Gregory T Stevens <gregs@umich.edu> wrote:
>Dan Hankins (dhankins@gate.net) wrote:
>: Patrick Juola (patrick@gryphon.psych.ox.ac.uk) wrote:
>: : and check.  On the other hand, we *can* very easily look at a computer
>: : program and demonstrate that no matter what the program "chose", it could
>: : not have chosen otherwise (or that the reason for its chosing was directly
>: : as a response outside of its will, like an external random generator).
>
>: Let's be careful with "could have chosen otherwise".  This need not mean 
>: that the past could actually have been different; it needs only mean 
>: that the entity had several options available to its consciousness, among 
>: which it selected using some criterion.
>
>So answer this:  If there is an individual entity which at any
>given time t will "choose" behavior b from a set of possible
>behaviors B by consistently applying a straightforward mathematical
>calculation of subjective estimated utility on each of the
>behaviors in the set and always "choosing" the behavior with the
>greatest evaluated utility ... then it has free will?
>
>Some people have proposed that this is what humans do.  We will,
>at any given point in time, have a set of behavioral options open
>to us.  We choose which behavior to carry out by estimating the
>"utility" or benefit we can expect from each one, and chosing that
>behavior which will benefit us the most.
>
This may be so, but what counts as the criteria for this "utility" or
"benefit" is still a problem. The criteria may depend on the present state
of the mind (or brain).

>But this kind of thing can easily be implemented in, say, a robot.
>It can, at any given point in time, consider all of the possible
>motions of all of its limbs, calculate based on some function the
>probability that that action will bring it closer to its goal, and
>carry out the action with the highest utility.  This robot satisfies
>your above definition of "free will," but can NOT do otherwise when
>presented with a "similar enough" situation in the future, because it
>is choosing based on a deterministic algorithm.
>
Since the physical state of the brain changes constantly - partly under
the influence of decisions taken and of observing their results - there is
no guarantee that a "similar enough" situation will lead to a similar
decision. The current state of the mind/brain is also a factor in decisions,
and the mind/brain probably never returns to a state it has been in before.
If at a certain juncture in life you took decision "a" instead of "b",
you did so because of some internal reasons. Saying that you could
have chosen "b" is nonsense - you could only have chosen "b" if you were
a different person. Try to think about what determines what makes you "you".
Isn't it the fact that in situation X you choose "a" and not "b" or "c", in
situation Y you choose "a1" and not "b1" or "c1", etc.?
If, for example, a hungry stray dog comes to my door, I can give it food or
I can kick it so that it goes away. Do _I_ really have a choice? I would not
be the person I am if I kicked it (or, conversely, if I fed it); I would be
some other person.
If a person you think you know does something totally unexpected, "out of
character", do you think that he/she exercised his/her 'free will' or rather
that he/she is not the person you thought him/her to be?
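The point about state-dependence can be sketched in code. This is a toy
illustration of my argument (not anything from Stevens' post): a fully
deterministic agent that always picks the behavior with the greatest
computed utility, where the utility function and the state-update rule are
both made-up stand-ins for whatever the mind/brain actually does. Because
every decision changes the internal state, presenting the "same" situation
twice need not produce the same choice:

```python
class Agent:
    """Deterministic utility maximizer with a decision-dependent state."""

    def __init__(self):
        # Stands in for the current state of the mind/brain.
        self.state = 0.0

    def utility(self, behavior, situation):
        # Hypothetical utility: depends on the external situation AND
        # on the agent's current internal state.
        return situation * behavior - self.state * behavior ** 2

    def choose(self, situation, behaviors):
        # Always "choose" the behavior with the greatest evaluated utility.
        best = max(behaviors, key=lambda b: self.utility(b, situation))
        # Taking the decision itself alters the internal state
        # (an assumed update rule, for illustration only).
        self.state += 0.5 * best
        return best

agent = Agent()
first = agent.choose(situation=1.0, behaviors=[1, 2, 3])
second = agent.choose(situation=1.0, behaviors=[1, 2, 3])
# first != second: the situation is identical, the algorithm is
# deterministic, yet the choice differs because the agent is no
# longer in the internal state it was in before.
```

So determinism alone does not entail that a "similar enough" situation
repeats the same behavior; that would additionally require the internal
state to repeat, which it plausibly never does.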

>--
>Greg Stevens       "We're talking about a society in which there are to be
>gregs@umich.edu     no roles, except for those chosen and those earned."
>                                          --sampled by Sequential

Andrzej
-- 
Andrzej Pindor                        The foolish reject what they see and 
University of Toronto                 not what they think; the wise reject
Information Commons                   what they think and not what they see.
pindor@breeze.hprc.utoronto.ca                      Huang Po
