Newsgroups: comp.ai.games
Path: cantaloupe.srv.cs.cmu.edu!bb3.andrew.cmu.edu!nntp.sei.cmu.edu!cis.ohio-state.edu!math.ohio-state.edu!usc!chi-news.cic.net!newsfeed.internetmci.com!vixen.cso.uiuc.edu!news.ecn.bgu.edu!siemens!darken
From: darken@scr.siemens.com (Christian Darken)
Subject: Re: REQ: reference for Axelrod's work on prisoner's dilemma?
Message-ID: <DJHB82.8tC@scr.siemens.com>
Sender: news@scr.siemens.com (NeTnEwS)
Nntp-Posting-Host: avocet.scr.siemens.com
Organization: Siemens Corporate Research, Princeton NJ
Date: Tue, 12 Dec 1995 15:04:49 GMT
Lines: 27


>Some time ago, I read in a Scientific American that Axelrod had
>shown that a variant of a "cooperate" strategy was superior to
>various greedy algorithms. Does anyone have a reference to that
>work, or indeed work that has succeeded this?


The Scientific American article: "The Arithmetics of Mutual Help",
June 1995, by M. Nowak, R. May, and K. Sigmund.

Which was based on: "Tit-for-Tat in Heterogeneous Populations",
M. Nowak and K. Sigmund, _Nature_, Vol. 355, No. 6357, pp. 250-253,
Jan. 16, 1992.

And also: "A Strategy of Win-Stay, Lose-Shift that Outperforms
Tit-for-Tat in the Prisoner's Dilemma Game", M. Nowak and K. Sigmund,
_Nature_, Vol. 364, No. 6432, pp. 56-58, July 1, 1993.

Axelrod's book: _The Evolution of Cooperation_, Basic Books, 1984
(I've also seen it cited as 1985)

A recent follow-on (of sorts), with useful references as well:
"Multiagent Reinforcement Learning in the Iterated Prisoner's
Dilemma", Tuomas W. Sandholm and Robert H. Crites, {sandholm,
crites}@cs.umass.edu, University of Massachusetts at Amherst
Computer Science Dept. Technical Report.
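For anyone who wants to experiment, here's a rough Python sketch of the
two strategies those Nature papers compare: Tit-for-Tat and Win-Stay,
Lose-Shift (often called "Pavlov"). This is my own toy harness, not code
from any of the papers; the payoff values (T=5, R=3, P=1, S=0) are the
standard ones Axelrod used, and all function names are mine.

```python
# Standard prisoner's dilemma payoffs, indexed by (my move, their move).
# "C" = cooperate, "D" = defect. Values are (my payoff, their payoff).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, opp_history):
    # Cooperate on the first move, then echo the opponent's last move.
    return "C" if not opp_history else opp_history[-1]

def win_stay_lose_shift(my_history, opp_history):
    # Cooperate first. Afterward, repeat the last move if it earned a
    # "good" payoff (R=3 or T=5), otherwise switch moves.
    if not my_history:
        return "C"
    last_payoff = PAYOFF[(my_history[-1], opp_history[-1])][0]
    if last_payoff >= 3:
        return my_history[-1]
    return "D" if my_history[-1] == "C" else "C"

def play(strat_a, strat_b, rounds=10):
    # Iterate the game, feeding each strategy both histories.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = strat_a(hist_a, hist_b)
        b = strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a += pa
        score_b += pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Two cooperators lock into mutual cooperation (30 points each over 10
rounds), while Tit-for-Tat against an unconditional defector loses only
the first round and then defects back, which is the behavior the
tournament results turn on.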

