From newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!uunet!psinntp!norton!brian Fri Jan 31 10:27:31 EST 1992
Article 3318 of comp.ai.philosophy:
Xref: newshub.ccs.yorku.ca comp.ai.philosophy:3318 sci.philosophy.tech:1997
Path: newshub.ccs.yorku.ca!ists!helios.physics.utoronto.ca!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!samsung!uunet!psinntp!norton!brian
From: brian@norton.com (Brian Yoder)
Newsgroups: comp.ai.philosophy,sci.philosophy.tech
Subject: Re: red light / blue light scenario
Message-ID: <1992Jan30.012944.5782@norton.com>
Date: 30 Jan 92 01:29:44 GMT
References: <1992Jan26.223233.28580@convex.com>
Organization: Symantec / Peter Norton
Lines: 12

Given that this whole discussion relies on a premise that is false (that
people can be identically duplicated by fancy machines), what importance does
this whole line of thinking have?  What principle are you trying to derive or
express?  We might possibly learn something about dealing with situations that
never arise, but who cares about learning about such things?


-- 
-- Brian K. Yoder (brian@norton.com) - Q: What do you get when you cross     --
-- Peter Norton Computing Group      -    Apple & IBM?                       --
-- Symantec Corporation              - A: IBM.                               --
--