From newshub.ccs.yorku.ca!torn!utgpu!pindor Thu Oct  8 10:10:52 EDT 1992
Article 7081 of comp.ai.philosophy:
Newsgroups: comp.ai.philosophy
Path: newshub.ccs.yorku.ca!torn!utgpu!pindor
From: pindor@gpu.utcs.utoronto.ca (Andrzej Pindor)
Subject: Re: Freewill, chaos and digital systems
Message-ID: <BvGCLC.GHs@gpu.utcs.utoronto.ca>
Organization: UTCS Public Access
References: <1992Sep15.215156.29721@mp.cs.niu.edu> <7598@skye.ed.ac.uk> <1992Sep29.204929.421@mp.cs.niu.edu> <7614@skye.ed.ac.uk>
Date: Thu, 1 Oct 1992 17:00:47 GMT

In article <7614@skye.ed.ac.uk> jeff@aiai.ed.ac.uk (Jeff Dalton) writes:
...................
>
>I can pretty much agree with all of that.  However, here is a thought
>experiment.  Robots have been developed that can pass the Turing Test.
>Indeed, we've decided that they can be considered persons and we've
>even given them voting rights.  The True Blue Robot Company starts
>making robots.  Now suppose it turns out that these robots tend to
>vote for the Conservative Party (which has blue as its colour).
>
>These robots might think all the things we think about how the things
>that seem to influence our decisions are what really does influence
>our decisions, and so forth.  They can read Dennett and know about
>varieties of free will worth wanting.  Nonetheless, it seems to me
>that there is still a real question about whether their voting choices
>are "free".  We have suspicions, and rightly so.  Maybe the bias
>is due to nothing more than the colour blue.  (The robots know they're
>True Blue robots and, in most of them, this helps lead them to think
>favorably of the colour blue.)  But maybe not.
>
>-- jd

There is a fascinating story by Stanislaw Lem, "The Mask", which illustrates
very well what you say.

 Andrzej Pindor

-- 
Andrzej Pindor
University of Toronto
Computing Services
pindor@gpu.utcs.utoronto.ca
