From newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!wupost!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system Sat Oct 24 20:44:27 EDT 1992
Article 7337 of comp.ai.philosophy:
Path: newshub.ccs.yorku.ca!torn!cs.utexas.edu!zaphod.mps.ohio-state.edu!wupost!waikato.ac.nz!aukuni.ac.nz!kcbbs!nacjack!codewks!system
Newsgroups: comp.ai.philosophy
Subject: Re: Freewill, chaos and digital systems
Message-ID: <4LVysB1w165w@CODEWKS.nacjack.gen.nz>
>From: system@CODEWKS.nacjack.gen.nz (Wayne McDougall)
Date: Tue, 20 Oct 92 12:40:26 NZDST
References: <7724@skye.ed.ac.uk>
Organization: The Code Works Limited, PO Box 10 155, Auckland, New Zealand
Lines: 66

> jd>                                            However, here is a thought
> jd> experiment.  Robots have been developed that can pass the Turing Test.
> jd> Indeed, we've decided that they can be considered persons and we've
> jd> even given them voting rights.  The True Blue Robot Company starts
> jd> making robots.  Now suppose it turns out that these robots tend to
> jd> vote for the Conservative Party (which has blue as its colour).
> 
> nr> Hmm.  These robots are sounding very much like my neighbors, except we
> nr> call it the Republican Party here.  If the robots uniformly voted
> nr> Conservative, we would be very suspicious. 
> 
> True.
> 
> nr>          But if there was a variation
> nr> of voting, with only a preponderance of Conservative votes, and if the
> nr> extent to which they voted Conservative were to change from election to
> nr> election, you might not be seeing anything much different from the
> nr> influences of educational background, religion, parental influence, etc,
> nr> on human voters.
> 
> Well, we have to assume that the conspirators at the True Blue Robot
> Company would take care not to be _too_ obvious.
> 
> It at least seems possible that they could produce robots that they
> _knew_ would behave in a certain way (either individually or statistically, 
> depending on whether they want to determine the decision or merely
> make it more probable), without it being immediately obvious from
> the robots' behavior (or to the robots themselves) that this was the case.
> 
> Now, these robots would presumably think they had free will just like
> we do.  Their decisions would feel free to them, they might say the
> same things we've been saying about how the things that seem to influence 
> our decisions are what really does influence our decisions, and so forth.
> Nonetheless, it seems to me that there is still a real question about
> whether their voting choices are free. 
> 
> In this case, we might be able to look at the records of the TBRC,
> interrogate the engineers and designers, check the program listings,
> and come up with some solid evidence.  We can't do literally the same
> for ourselves.  But can we even do anything like it?  Can we look
> at our programming, so to speak?  Are we fundamentally different from
> the robots?  If so, how?  If not, can we really say we have free will?
> And if so, what happens to our feeling that there was a real question
> about free will in the case of the robots?
> 
> -- jd


Hmmmm, but (to take the Conservative case), you could look at the 
output of (say) Oxbridge - and golly gosh, there is a preponderance for 
voting Tory (;-)) which changes in level from election to election (the 
lecturers don't want to make it TOO obvious). So what ya going to do? 
Interrogate the lecturers and course planners, check the course 
material, and come up with some solid evidence.

In any event, no matter what educational and environmental factors you 
initiate an AI Turing Robot with, no rational system will stay a 
Republican B-D.

This sounds like the nurture versus nature argument.

-- 
  Wayne McDougall, BCNU
  This .sig unintentionally left blank.

Hello! I'm a .SIG Virus. Copy me and spread the fun.