Newsgroups: comp.robotics
Path: brunix!sgiblab!spool.mu.edu!howland.reston.ans.net!xlink.net!scsing.switch.ch!swidir.switch.ch!univ-lyon1.fr!news.imag.fr!isis.imag.fr!cosmos!reignier
From: reignier@imag.imag.fr (patrick reignier)
Subject: Difference between (ASE/ACE) and Q-learning
Message-ID: <CKnHp7.LHy@imag.fr>
Keywords: Learning
Sender: news@imag.fr (Administration des news)
Nntp-Posting-Host: orion
Organization: Institut Imag, Grenoble, France
Date: Thu, 3 Feb 1994 13:34:19 GMT
Lines: 25

Hi,

I am a PhD student working in robotics.
I am interested in the field of reactive navigation.
I am currently considering some machine learning approaches, more
precisely:
- reinforcement learning with an associative search element (ASE) and an
  adaptive critic element (ACE), as proposed by Sutton;
- Q-learning, as proposed by Watkins.

It seems to me that the difference between these two approaches is that
Q-learning embeds Sutton's "ASE" and "ACE" into a single function.
- Am I wrong ?
- Are there any fundamental differences ?
- On what criteria should one of these two methods be chosen over
  the other ?
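To make the comparison concrete, here is a minimal tabular sketch of the two update rules on a toy chain world (the chain task, parameter values, and all names are my own illustration, not from either paper). The Q-learning agent keeps one table that serves as both value estimate and policy; the ASE/ACE agent keeps two separate structures, a critic (state values) and an actor (action preferences), coupled only through the TD error:

```python
import random

N_STATES, N_ACTIONS = 4, 2
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9

def step(s, a):
    """Toy deterministic chain: action 1 moves right, 0 moves left.
    Reward 1.0 only on reaching the rightmost state."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, (1.0 if s2 == N_STATES - 1 else 0.0)

# --- Q-learning (Watkins): a single table carries value AND policy ---
Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def q_update(s, a, r, s2):
    target = r + GAMMA * max(Q[s2])          # bootstrapped target
    Q[s][a] += ALPHA * (target - Q[s][a])    # one update rule does it all

# --- ASE/ACE (Barto, Sutton & Anderson): two separate elements ---
V = [0.0] * N_STATES                               # ACE: critic, state values
P = [[0.0] * N_ACTIONS for _ in range(N_STATES)]   # ASE: actor preferences

def ase_ace_update(s, a, r, s2):
    delta = r + GAMMA * V[s2] - V[s]   # ACE computes the TD error ...
    V[s] += ALPHA * delta              # ... and improves its own estimate
    P[s][a] += BETA * delta            # ASE is reinforced by the critic's signal

random.seed(0)
for _ in range(2000):
    s = random.randrange(N_STATES - 1)   # random starting state (not terminal)
    a = random.randrange(N_ACTIONS)      # random exploratory action
    s2, r = step(s, a)
    q_update(s, a, r, s2)
    ase_ace_update(s, a, r, s2)

# Both learners end up preferring "right" near the goal, but Q-learning
# reads the preference off Q directly, while ASE/ACE reads it off P.
print(Q[2][1] > Q[2][0], P[2][1] > P[2][0])
```

So my (possibly mistaken) reading is that Watkins's Q(s,a) plays the roles of both V and P above at once, whereas Sutton keeps the value estimator and the action selector as distinct adaptive elements.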

Thank you very much for your help with these questions.

-- 
Patrick Reignier        
LIFIA - INPG              A train station is a place where trains stop. 
Grenoble France.        Now, I better understand the WorkStation concept.   
email : Patrick.Reignier@imag.fr
