Newsgroups: comp.ai.jair.announce
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!vixen.cso.uiuc.edu!uwm.edu!lll-winken.llnl.gov!ames!kronos.arc.nasa.gov!jair-ed
From: jair-ed@ptolemy.arc.nasa.gov
Subject: New Article, Truncating Temporal Differences...
Message-ID: <1995Jan19.202608.28456@ptolemy-ethernet.arc.nasa.gov>
Originator: jair-ed@polya.arc.nasa.gov
Lines: 56
Sender: usenet@ptolemy-ethernet.arc.nasa.gov (usenet@ptolemy.arc.nasa.gov)
Nntp-Posting-Host: polya.arc.nasa.gov
Organization: NASA/ARC Computational Sciences Division
Date: Thu, 19 Jan 1995 20:26:08 GMT
Approved: jair-ed@ptolemy.arc.nasa.gov

JAIR is pleased to announce the publication of the following article:

Cichosz, P. (1995)
  "Truncating Temporal Differences: On the Efficient Implementation of 
   TD(lambda) for Reinforcement Learning", Volume 2, pages 287-318.
   PostScript: volume2/cichosz95a.ps (313K)

   Abstract: Temporal difference (TD) methods constitute a class of
   methods for learning predictions in multi-step prediction problems,
   parameterized by a recency factor lambda. Currently the most important
   application of these methods is to temporal credit assignment in
   reinforcement learning. Well-known reinforcement learning algorithms,
   such as AHC or Q-learning, may be viewed as instances of TD learning.
   This paper examines the issues of the efficient and general
   implementation of TD(lambda) for arbitrary lambda, for use with
   reinforcement learning algorithms optimizing the discounted sum of
   rewards. The traditional approach, based on eligibility traces, is
   argued to suffer from both inefficiency and lack of generality. The
   TTD (Truncated Temporal Differences) procedure is proposed as an
   alternative that only approximates TD(lambda), but requires very
   little computation per action and can be used with arbitrary
   function representation methods. The idea from which it is derived is
   fairly simple and not new, but has apparently remained unexplored.
   Encouraging experimental results are presented, suggesting that using
   lambda>0 with the TTD procedure yields a significant learning speedup
   at essentially the same cost as standard TD(0) learning.
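   [Editor's note: the following is an illustrative sketch of the
   truncation idea described in the abstract, not code from the paper.
   It assumes a tabular value function and a buffer of the last m
   transitions; the update rule shown is a plausible reading of
   "truncated TD(lambda)": compute a lambda-return backward over the
   buffer only, then update the state visited m steps ago.]

```python
# Hypothetical sketch of the truncated-TD idea (not the paper's code):
# keep only the last m transitions and update the value of the state
# visited m steps ago toward a lambda-return truncated at the buffer's
# newest end, avoiding per-state eligibility traces.
from collections import deque

def ttd_update(V, buffer, gamma, lam, alpha):
    """One TTD-style update for the oldest state in a full buffer.

    V      : dict mapping state -> estimated value (tabular illustration)
    buffer : deque of (state, reward, next_state), oldest first
    """
    # Start from a one-step return at the newest transition...
    _, r, s_next = buffer[-1]
    z = r + gamma * V.get(s_next, 0.0)
    # ...and fold in older transitions backward, mixing the one-step
    # bootstrap and the accumulated return with weight lambda.
    for _, r, s_next in reversed(list(buffer)[:-1]):
        z = r + gamma * ((1 - lam) * V.get(s_next, 0.0) + lam * z)
    # TD(0)-style update of the oldest buffered state toward the
    # truncated lambda-return.
    s_old = buffer[0][0]
    V[s_old] = V.get(s_old, 0.0) + alpha * (z - V.get(s_old, 0.0))
    return V
```

   With lam=0 the backward pass collapses to the ordinary one-step
   TD(0) target for the oldest state, consistent with the abstract's
   claim that TTD costs essentially the same as TD(0) learning.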

The PostScript file is available via:
   
 -- comp.ai.jair.papers

 -- World Wide Web: The URL for our World Wide Web server is
       http://www.cs.washington.edu/research/jair/home.html

 -- Anonymous FTP from either of the two sites below:
      CMU:   p.gp.cs.cmu.edu        directory: /usr/jair/pub/volume2
      Genoa: ftp.mrg.dist.unige.it  directory:  pub/jair/pub/volume2

 -- automated email. Send mail to jair@cs.cmu.edu or jair@ftp.mrg.dist.unige.it
    with the subject AUTORESPOND, and the body GET VOLUME2/CICHOSZ.PS
    (either upper or lowercase is fine). 
    Note: Your mailer might find this file too large to handle.

 -- JAIR Gopher server: At p.gp.cs.cmu.edu, port 70. 

For more information about JAIR, check out our WWW or FTP sites, or
send electronic mail to jair@cs.cmu.edu with the subject AUTORESPOND
and the message body HELP, or contact jair-ed@ptolemy.arc.nasa.gov.
