15-859(A) Projects and Presentations
One of the course requirements is to do a project or presentation.
Here is some more information on what is expected, and some ideas to
get you started.
- Presentations: These should be done in groups of 2. You will read one
or more papers and produce two things: a class presentation, and a
5-page class handout on the topic. You can have one person do the
presentation and one person do the handout, or split both
50/50. The topic should be chosen in consultation with the instructor.
- Projects: These can be done individually or in groups of 2. A
project might involve conducting an experiment, thinking about a
theoretical problem, or trying to relate two problems. The end result
should be a 5-10 page report and, if you wish, a 5-10 minute presentation.
Presentation Ideas
Here are a few ideas for possible presentations. You might also
want to take a look at recent COLT proceedings.
- Algorithms for learning DNF formulas in various models.
- Learning functions of r relevant variables. See E. Mossel and
R. O'Donnell and R. Servedio. Learning Juntas. 35th
Annual Symposium on Theory of Computing (STOC), 2003, pp. 206-212.
- Online learning algorithms:
- A. Kalai and S. Vempala. Efficient
algorithms for online decision problems, COLT 2003.
This paper gives a simple algorithm for the experts problem
that generalizes to a broad class of settings such as online shortest
paths (each day you need to drive to work, but traffic patterns are
totally different; your goal is to do nearly as well as the best fixed
path in hindsight). The issue here is that a graph G might have
exponentially many paths, so you don't want to represent each path
explicitly as a different "expert". Basically, they show that the
following simple approach works: choose the path that did best in
hindsight, but where you "hallucinate" an additional day
0 with a random traffic pattern.
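The idea above can be sketched in the plain experts setting (where each "path" is just a single expert). This is my own minimal illustration of the follow-the-perturbed-leader approach, not code from the paper; the perturbation distribution and the epsilon parameter here are one simple choice among several the paper analyzes.

```python
import random

def follow_the_perturbed_leader(loss_rows, epsilon=1.0):
    """Sketch of Kalai-Vempala follow-the-perturbed-leader for experts.

    loss_rows: a list of rounds, each a list with one loss per expert.
    Each round, play the expert with the smallest cumulative loss so far,
    after "hallucinating" a random day-0 bonus for each expert (drawn once,
    up front).  epsilon controls the perturbation scale (an assumption of
    this sketch, not the paper's exact parameterization).
    Returns the sequence of experts played and the total loss incurred.
    """
    n = len(loss_rows[0])
    # Hallucinated day-0 perturbations: one random draw per expert.
    perturbation = [random.expovariate(epsilon) for _ in range(n)]
    cumulative = [0.0] * n
    picks, total_loss = [], 0.0
    for losses in loss_rows:
        # Leader on the perturbed history (perturbation acts as a bonus).
        leader = min(range(n), key=lambda i: cumulative[i] - perturbation[i])
        picks.append(leader)
        total_loss += losses[leader]
        for i in range(n):
            cumulative[i] += losses[i]
    return picks, total_loss
```

The point of the one-time perturbation is that the algorithm stays stable from round to round, which is what yields the low-regret guarantee; in the shortest-paths setting, "choose the leader" becomes a single shortest-path computation on the perturbed cumulative edge costs, so the exponentially many paths never need to be enumerated.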
- Extensions of the experts setting: specialists/sleeping-experts, "internal regret", the "bandit" setting, and the "apple-tasting" model. (Ask me for papers.)
- On-line learning and Bregman divergences. The top couple of entries
on this page are tutorials by Manfred Warmuth.
- Boosting: (See Rob Schapire's
page and especially this
overview for more)
- Learning in Markov Decision Processes. See Michael Kearns's home
page and Yishay Mansour's home page for a number of good papers.
- Support vector machines, kernel-based algorithms.
Margins and luckiness functions. Random projection.
- PAC-Bayes bounds, shell-bounds, and other methods of obtaining generalization bounds.
- Papers on clustering
- Learning in Graphical Models (Bayes Nets)
Project Ideas
- You might do a project based on one of the above papers/topics or
a paper/topic discussed in class. For example, this could involve
implementing an algorithm and trying it out on a learning problem,
playing with some of the design issues involved. Or you could try
improving/simplifying some of the theoretical results. Or
extending/applying them to a different setting.
- You could think of a new learning problem or theoretical model
that gets at some issue not addressed by the standard problems and models
discussed in class. See if you can analyze the model or solve the problem.
- Try your hand at one of the open problems discussed in class.
- Or, suggest something...
Last modified: Tue Mar 02 09:14:40 Eastern Standard Time 2004