Newsgroups: comp.ai.alife,comp.ai.genetic
Path: cantaloupe.srv.cs.cmu.edu!rochester!udel!news.mathworks.com!uunet!psinntp!vivitech.com!bpolant
From: bpolant@vivitech.com (Bradley Polant - Software Engineering)
Subject: Re: Music Generation
Message-ID: <1995Apr12.155040.15859@vivitech.com>
Organization: Vivid Technologies Inc.
References: <q-WhvA+KBh107h@norml.ak.planet.co.nz>
Date: Wed, 12 Apr 1995 15:50:40 GMT
Lines: 21
Xref: glinda.oz.cs.cmu.edu comp.ai.alife:3050 comp.ai.genetic:5603

In article <q-WhvA+KBh107h@norml.ak.planet.co.nz> worik@norml.ak.planet.co.nz (Worik Stanton) writes:
>I am interested if anybody is doing any work in music generation
>/recognition using genetic alg. or A-life.
>
>I saw some work done here in New Zealand using Markov chains that 
>endevoured to reproduce improvised blues guitar. They had some success,
> but it seems to me that G.A. or A-Life approaches would be more effective.
   Well, to me the problem with a GA application (and I am not saying
it can't be done; I am only identifying the meaty bits) is the fitness
function.  Do you (a) do comparisons with existing works, and perhaps have
distributed fitnesses (in that case making each small part sound like a lot
of different parts), or (b) have human testers listen to each offspring
and assign fitness (time consuming as hell)?
  Any ideas?
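
For concreteness, option (a) might be sketched roughly like this: evolve
note sequences whose fitness is closeness to an existing reference melody.
Everything here is made up for illustration (the reference pitches, note
range, and GA parameters are assumptions, not anything from the Markov-chain
work mentioned above):

```python
import random

# Sketch of option (a): fitness = similarity to an existing work.
# REFERENCE stands in for a melody taken from a known piece (MIDI pitches).
# All values here are illustrative assumptions.
REFERENCE = [60, 62, 64, 65, 67, 65, 64, 62]
NOTE_RANGE = range(55, 72)

def random_melody(n=len(REFERENCE)):
    return [random.choice(NOTE_RANGE) for _ in range(n)]

def fitness(melody):
    # Higher is better: negative total pitch distance to the reference.
    return -sum(abs(a - b) for a, b in zip(melody, REFERENCE))

def mutate(melody, rate=0.2):
    # Each note has a small chance of being replaced by a random pitch.
    return [random.choice(NOTE_RANGE) if random.random() < rate else note
            for note in melody]

def crossover(a, b):
    # Single-point crossover between two parent melodies.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200):
    pop = [random_melody() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, breed the rest from it.
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Of course this just reproduces the reference, which is exactly the problem:
a plagiarism machine scores perfectly, so the interesting work is in a
distributed fitness that rewards resemblance to many pieces at once without
copying any single one.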
BP
>
>WORIK STANTON
>
>w.stanton@auckland.ac.nz


