My work has focused on how the need to distinguish good actions from bad ones can direct the process of building a good representation of the environment in terms of <em>relevant</em>, or <em>important</em>, features (see my note on <a href="http://www.cs.wisc.edu/~finton/ibfe.html">importance-based feature extraction</a>). Currently I am applying this notion of <em>importance</em> to the problem of balancing the need to explore the world against the need to perform optimally (exploration vs. exploitation). I am also investigating ways of using <em>importance</em> to make learning more efficient by allowing the system to choose the starting points for its learning experiments (active learning). My goal is to develop a better understanding of intelligent adaptation. I hope that this will provide a basis for intelligent action that can also benefit from knowledge-based and task-based work. See my (really out-of-date, sorry!) <a href="http://www.cs.wisc.edu/~finton/rlpage.html">reinforcement learning page</a> for more information.
<h2>
