From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!news.cs.indiana.edu!arizona.edu!NSMA.AriZonA.EdU!bill Thu Dec 26 23:57:57 EST 1991
Article 2350 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:2350 sci.philosophy.tech:1562
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!think.com!spool.mu.edu!news.cs.indiana.edu!arizona.edu!NSMA.AriZonA.EdU!bill
From: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: red light / blue light scenario
Message-ID: <1991Dec21.104936.2301@arizona.edu>
Date: 21 Dec 91 17:49:35 GMT
Article-I.D.: arizona.1991Dec21.104936.2301
References: <1991Dec20.004238.11206@smsc.sony.com> <1991Dec19.222126.2296@arizona.edu> <1991Dec20.202630.14526@smsc.sony.com>
Reply-To: bill@NSMA.AriZonA.EdU (Bill Skaggs)
Distribution: world,local
Organization: Center for Neural Systems, Memory, and Aging
Lines: 50
Nntp-Posting-Host: cortex.nsma.arizona.edu


>  Tomorrow Mr. Skaggs is going to be the subject of an actual
>  experiment which may involve creating a duplicate of him.
>  Today he is given the proverbial two choices, allowing him
>  to determine which of these two experiments will be performed
>  tomorrow:
>
>  1)  Tomorrow he'll be duplicated.  The duplicate will have a Very
>      Horrible Thing happen to him.  The original Mr. Skaggs will have
>      a Very Nice Thing happen to him.
>
>  2)  Tomorrow no duplication will take place.  The one and only
>      Mr. Skaggs will face a lottery.  There is a 1 in 10 chance
>      that a Very Horrible Thing will happen to him, and a 9 in 10
>      chance that a Very Nice Thing will happen to him.
>
>  I really hate to put you through this; I hope you understand that
>  it's all in the name of philosophical investigation :-)
>
>Tomorrow semantic problems with the word "you" will abound, but today
>there is one and only one Mr. Skaggs, and that's *you*, Mr. Skaggs.
>So I'm asking *you*, today, what choice will you make today, and why?

It's clear to me what decision I would make, but it's not so easy
to explain why I would make it, and I'm not really sure the choice
is all that significant.

I would choose option (1).  Before I explain why, let me note that
by the evolutionary criterion of maximizing my genetic fitness,
option (1) is clearly preferable:  it leaves two copies of me,
each of which is potentially capable of replicating, and one of
which has had a Very Good Thing happen to it.
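
For what it's worth, here is a little back-of-the-envelope sketch of
that fitness bookkeeping, written out as a few lines of Python.  The
assumption that a copy can still replicate after a Very Horrible Thing
happens to it is mine, purely for illustration.

    # Rough bookkeeping for the "genetic fitness" comparison above.
    # Illustrative assumption: a copy survives either outcome, so
    # fitness is just the number of copies left around afterwards.

    # Option 1: duplication; one copy gets the Very Nice Thing and
    # the other gets the Very Horrible Thing.
    copies_1 = 2
    nice_copies_1 = 1

    # Option 2: no duplication; the single Mr. Skaggs faces a 1-in-10
    # lottery between the two outcomes.
    copies_2 = 1
    expected_nice_copies_2 = 0.9 * 1 + 0.1 * 0   # = 0.9

    print("option 1: copies =", copies_1,
          " copies with a Nice Thing =", nice_copies_1)
    print("option 2: copies =", copies_2,
          " expected copies with a Nice Thing =", expected_nice_copies_2)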

Now of course people (me included) don't make decisions by 
calculating the effect upon their genetic fitness --- they do
it by visualizing each option and imagining how it would
*feel*.  I can easily do this for option (2), but I can't
for option (1).  When I try to visualize option (1), I find
that my tendency is to identify myself with the copy that has
the Very Good Thing happen to it, and ignore what happens to
the other copy --- maybe because I don't like to visualize
Very Horrible Things happening to me.  Since the copy I
identify with myself is certain to have something good happen
to it, I prefer this option.

I'm the first to admit that this decision is not based on
deep philosophical grounds, so I would be quite skeptical of
any attempt to draw deep philosophical conclusions from it.

	-- Bill
