Newsgroups: comp.ai.neural-nets
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!gatech!howland.reston.ans.net!nntp.crl.com!decwrl!pa.dec.com!mrnews.mro.dec.com!janix.mfr.dec.com!stkai1.enet.dec.com!t_andersson
From: t_andersson@stkai1.enet.dec.com ()
Subject: Re: Othello
Message-ID: <1995Apr26.170213.21833@janix.mfr.dec.com>
Lines: 21
Sender: news@janix.mfr.dec.com (SDSC USENET News System)
Reply-To: t_andersson@stkai1.enet.dec.com ()
Organization: Digital Equipment GmbH Alpha Porting Center OSSC/SDSC
X-Newsreader: dxrn 6.18-32
References: <PETER.95Apr23123122@swamp.indigo.co.il>   <D7KBwA.9Ju@freenet.carleton.ca>
Date: Wed, 26 Apr 1995 17:02:13 GMT


In a previous posting, Peter Gordon (peter@swamp.indigo.co.il) writes:
|>> I would like to write Othello using a neural network.
|>> 
|>> Both backprop and recurrent backprop seem unsuitable. I want 
|>> the network to learn from complete game examples. I don't want 
|>> to impose external ideas of what is a 'good' position and what 
|>> is a 'bad' position. The only objective thing you can say is who 
|>> is the winner when the end of the game is reached.
|>> 
|>> Does anyone have any ideas?

Why not create a population of neural networks and let them play
each other? Discard the losers and use a genetic algorithm to
evolve the population, applying common genetic operators such as
mutation and crossover to the survivors.

After a few generations, your champion should be a pretty good
player.
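A minimal sketch of that loop, in Python. Everything here is illustrative: each network is reduced to a flat weight vector (the genome), and since the full Othello rules are out of scope, play() uses a stand-in fitness comparison in place of an actual game between the two networks. The constants and helper names are all made up for the example.

```python
import random

# Hypothetical toy setup: each "player" is a tiny fixed-topology
# network encoded as a flat weight vector (its genome).
N_WEIGHTS = 16
POP_SIZE = 20       # must be even, so the population pairs up cleanly
GENERATIONS = 30

def new_genome():
    return [random.uniform(-1, 1) for _ in range(N_WEIGHTS)]

def play(a, b):
    # Stand-in for an Othello match: the genome whose weights better
    # approximate a hidden target "strategy" wins.  In a real system
    # this would play a full game between the two networks and return
    # the winner.
    target = [0.5] * N_WEIGHTS
    err = lambda g: sum((w - t) ** 2 for w, t in zip(g, target))
    return a if err(a) < err(b) else b

def crossover(a, b):
    # One-point crossover: splice the two parent genomes at a random cut.
    cut = random.randrange(1, N_WEIGHTS)
    return a[:cut] + b[cut:]

def mutate(g, rate=0.1, scale=0.2):
    # Perturb each weight with small Gaussian noise at probability `rate`.
    return [w + random.gauss(0, scale) if random.random() < rate else w
            for w in g]

def evolve():
    pop = [new_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Pair up the population; winners survive, losers are discarded.
        random.shuffle(pop)
        winners = [play(pop[i], pop[i + 1]) for i in range(0, POP_SIZE, 2)]
        # Refill the population from the winners via crossover + mutation.
        children = [mutate(crossover(random.choice(winners),
                                     random.choice(winners)))
                    for _ in range(POP_SIZE - len(winners))]
        pop = winners + children
    # The champion is whoever survives a final elimination pass.
    champ = pop[0]
    for g in pop[1:]:
        champ = play(champ, g)
    return champ
```

With a real move-generating network in place of the toy play(), the same loop applies unchanged; the only expensive part becomes the games themselves, so you would likely want fewer, longer tournaments per generation.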

- Tomas Andersson -
