Machine Learning
10-601, Spring 2009
Tom Mitchell, Machine Learning Department, School of Computer Science, Carnegie-Mellon University
Course Project
Your class project is an opportunity for you to explore an interesting machine learning problem of your choice in the context of a real-world data set. Projects can be done by you as an individual, or in teams of two students. Each project will also be assigned a 601 instructor as a project consultant/mentor. Instructors and TAs will consult with you on your ideas, but of course the final responsibility to define and execute an interesting piece of work is yours. Your project will be worth 20% of your final class grade, and will have 4 deliverables:
- Proposal: 1 page (10%)
- Midway Report: 3-4 pages (20%)
- Final Report: 6-8 pages (40%)
- Poster Presentation (30%)
Note that all final write-ups must be in the format of a NIPS paper. The page limits are strict! Papers over the limit will not be considered.
Project Proposal
You must turn in a brief project proposal (1-page maximum). Read the list of available data sets and potential project ideas below. It is highly recommended that you use one of these data sets, because we know that they have been successfully used for machine learning in the past. If you have another data set you want to work on, you can discuss it with us. However, we will not allow projects on data that has not been collected by the time of your proposal submission, so if you would like to work on another data set you must show us that you already have this data in hand. It is also possible to propose a project on some theoretical aspects of machine learning. If you want to do this, please discuss it with us. Note that even though you can use data sets you have used before, you cannot use as a class project something you started working on before the class.
Project proposal format: Proposals should be one page maximum. Include the following information:
- Project title
- Data set
- Project idea. This should be approximately two paragraphs.
- Software you will need to write.
- Papers to read. Include 1-3 relevant papers. You will probably want to read at least one of them before submitting your proposal.
- Teammate: will you have a teammate? If so, whom? Maximum team size is two students. We expect projects done in a group to be more substantial than projects done individually (each team should submit just one proposal writeup).
- Midway report milestone: What will you complete by the time of the midway report? Experimental results of some kind are expected here.
Midway Report
This should be a 3-4 page short report. It serves as a check-point. It should consist of the same sections as your final report (introduction, related work, method, experiment, conclusion), with a few sections `under construction'. Specifically, the introduction and related work sections should be in their final form; the section on the proposed method should be almost finished; the sections on the experiments and conclusions will have whatever results you have obtained, as well as `place-holders' for the results you plan/hope to obtain.
Grading scheme for the midway report:
- 70% for proposed method and initial experimental results
- 25% for the design of upcoming experiments and extensions to the algorithm
- 5% for plan of activities (in an appendix, please show the old one and the revised one, along with the activities of each group member)
Final Report
Your final report is expected to be a 6 to 8 page report. You should submit both an electronic and a hardcopy version for your final report. It should roughly have the following format:
- Brief Introduction - Motivation
- Precise problem definition
- Proposed method
- Intuition - why should it be better than the state of the art?
- Description of its algorithms
- Experiments
- Description of your testbed; list of questions your experiments are designed to answer
- Details of the experiments; observations, and answers they provide to your questions
- Conclusions
Poster Presentation
We will have a public poster session (Wednesday, April 29, 3-5pm). At least one project member should be present during the poster hours. The session will be open to everybody. During this session the course instructors will hear your poster presentation.
Project Suggestions:
Ideally, you will want to pick a problem in a domain of your interest, e.g., natural language parsing, DNA sequence analysis, text information retrieval, network mining, reinforcement learning, sensor networks, etc., and formulate your problem using machine learning techniques. You can then, for example, adapt and tailor standard inference/learning algorithms to your problem, and do a thorough performance analysis. You can also find some project ideas below.
Project A1: Cognitive State Classification with Magnetoencephalography Data (MEG)
Data:
A zip file containing some example preprocessing of the data into features, along with some text file descriptions: LanguageFiles.zip. The raw time data (12 GB) for two subjects (DP/RG_mats) and the FFT data (DP/RG_avgPSD) is located at:
/afs/cs.cmu.edu/project/theo-23/meg_pilot
You should access this directly through AFS space.
This data set contains a time series of images of brain activation, measured using MEG. Human subjects viewed 60 different objects divided into 12 categories (tools, foods, animals, etc...). There are 8 presentations of each object, and each presentation lasts 3-4 seconds. Each second has hundreds of measurements from 300 sensors. The data is currently available for 2 different human subjects.
Project A: Building a cognitive state classifier
Project idea: We would like to build classifiers to distinguish between the different categories of objects (e.g. tools vs. foods) or even the objects themselves if possible (e.g. bear vs. cat). The exciting thing is that no one really knows how well this will work (or if it's even possible). This is because the data was only gathered a few weeks ago (Aug-Sept 08). One of the main challenges is figuring out how to make good features from the raw data. Should the raw data just be used? Or maybe it should first be passed through a low-pass filter? Perhaps an FFT should convert the time series to the frequency domain first? Should the features represent absolute sensor values, or changes from some baseline? If so, what baseline? Another challenge is discovering which features are useful for which tasks. For example, the features that distinguish foods from animals may be different from those that distinguish tools from buildings. What are good ways to discover these features?
This project is more challenging and risky than the others because it is not known what the results will be. But this is also good because no one else knows either, meaning that a good result could lead to a possible publication.
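As a concrete starting point for the featurization questions above, here is a minimal sketch that turns one sensor's time series into average power per frequency band via an FFT. The band count and sampling rate below are illustrative assumptions, not properties of the MEG recordings:

```python
import numpy as np

def psd_features(signal, sample_rate, n_bands=8):
    """Summarize one sensor's time series as the mean spectral power in
    n_bands equal-width frequency bands (a simple FFT-based feature)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2               # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)  # bin frequencies
    edges = np.linspace(0.0, freqs[-1], n_bands + 1)
    return np.array([spectrum[(freqs >= lo) & (freqs < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

# A pure 30 Hz tone sampled at 200 Hz concentrates power in a single band.
t = np.arange(1000) / 200.0
feats = psd_features(np.sin(2 * np.pi * 30.0 * t), sample_rate=200.0)
```

The same idea extends naturally to change-from-baseline features: compute the band powers over a baseline window and subtract them from the band powers of the stimulus window.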
Papers to read:
Relevant but in the fMRI domain: "Learning to Decode Cognitive States from Brain Images", Mitchell et al., 2004; "Predicting Human Brain Activity Associated with the Meanings of Nouns", Mitchell et al., 2008.
MEG paper: "Predicting the recognition of natural scenes from single trial MEG recordings of brain activity", Rieger et al., 2008 (access from CMU domain).
Project A2: Brain imaging data (fMRI)
This data set contains a time series of images of brain activation, measured using fMRI, with one image every 500 msec. During this time, human subjects performed 40 trials of a sentence-picture comparison task (reading a sentence, observing a picture, and determining whether the sentence correctly described the picture). Each of the 40 trials lasts approximately 30 seconds. Each image contains approximately 5,000 voxels (3D pixels), across a large portion of the brain. Data is available for 12 different human subjects.
Available software: we can provide Matlab software for reading the data, manipulating and visualizing it, and for training some types of classifiers (Gaussian Naive Bayes, SVM).
Project A: Bayes network classifiers for fMRI
Project idea: Gaussian Naive Bayes classifiers and SVMs have been used with this data to predict when the subject was reading a sentence versus perceiving a picture. Both of these classify 8-second windows of data into these two classes, achieving around 85% classification accuracy [Mitchell et al, 2004]. This project will explore going beyond the Gaussian Naive Bayes classifier (which assumes voxel activities are conditionally independent) by training a Bayes network, in particular a TAN tree [Friedman, et al., 1997]. Issues you'll need to confront include which features to include (5000 voxels times 8 seconds of images is a lot of features) for classifier input, whether to train brain-specific or brain-independent classifiers, and a number of issues about efficient computation with this fairly large data set.
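As a baseline against which to compare a TAN classifier, Gaussian Naive Bayes is only a few lines. This is a minimal NumPy sketch; the feature dimensions and data in the usage example are illustrative, not the fMRI format:

```python
import numpy as np

class GaussianNB:
    """Minimal Gaussian Naive Bayes: per-class mean and variance per feature,
    treating features (e.g. voxels) as conditionally independent given the class."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-6 for c in self.classes])
        self.prior = np.array([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood under independent Gaussians, plus log prior.
        ll = -0.5 * (np.log(2 * np.pi * self.var)[None] +
                     (X[:, None, :] - self.mu[None]) ** 2 / self.var[None]).sum(-1)
        return self.classes[np.argmax(ll + np.log(self.prior), axis=1)]

# Two well-separated toy clusters stand in for the two cognitive states.
X0 = np.array([[0.0, 0.0], [0.1, -0.1], [-0.1, 0.1], [0.05, 0.0]])
X = np.vstack([X0, X0 + 5.0])
y = np.array([0] * 4 + [1] * 4)
pred = GaussianNB().fit(X, y).predict(np.array([[0.0, 0.1], [5.0, 4.9]]))
```

A TAN tree relaxes exactly the independence assumption encoded in the `ll` computation, by allowing each feature one extra feature parent.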
Papers to read: "Learning to Decode Cognitive States from Brain Images", Mitchell et al., 2004; "Bayesian Network Classifiers", Friedman et al., 1997.
Project B: Image Segmentation Dataset
The goal is to segment images in a meaningful way. Berkeley collected three hundred images and paid students to hand-segment each one (usually each image has multiple hand-segmentations). Two hundred of these images are training images, and the remaining 100 are test images. The dataset includes code for reading the images and ground-truth labels, computing the benchmark scores, and some other utility functions. It also includes code for a segmentation example. This dataset is new and the problem unsolved, so there is a chance that you could come up with the leading algorithm for your project.
http://www.cs.berkeley.edu/projects/vision/grouping/segbench/
Project ideas:
Project B: Region-Based Segmentation
Most segmentation algorithms have focused on segmentation based on edges or based on discontinuity of color and texture. The ground-truth in this dataset, however, allows supervised learning algorithms to segment the images based on statistics calculated over regions. One way to do this is to "oversegment" the image into superpixels (Felzenszwalb 2004, code available) and merge the superpixels into larger segments. Graphical models can be used to represent smoothness in clusters, by adding appropriate potentials between neighboring pixels. In this project, you can address, for example, learning of such potentials, and inference in models with very large tree-width.
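To make the oversegment-then-merge pipeline concrete, here is a minimal sketch in which a greedy color-distance rule stands in for the learned merge potentials discussed above; the superpixel colors and adjacency in the example are made up for illustration:

```python
import numpy as np

def merge_superpixels(mean_colors, adjacency, threshold):
    """Greedy region merging with union-find: union adjacent superpixels
    whose mean colors are within `threshold` (Euclidean distance)."""
    parent = list(range(len(mean_colors)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, j in adjacency:
        if np.linalg.norm(np.asarray(mean_colors[i], dtype=float) -
                          np.asarray(mean_colors[j], dtype=float)) < threshold:
            parent[find(i)] = find(j)
    return [find(i) for i in range(len(mean_colors))]

# Four superpixels in a chain: two dark, two bright; only like merges with like.
labels = merge_superpixels([[0, 0, 0], [1, 1, 1], [100, 100, 100], [101, 101, 101]],
                           adjacency=[(0, 1), (1, 2), (2, 3)], threshold=10.0)
```

In the actual project, the hard threshold would be replaced by a learned pairwise potential, and the greedy pass by inference in the graphical model.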
Papers to read: Some segmentation papers from
Project C: Twenty Newsgroups text data
This data set contains 1000 text articles posted to each of 20 online newsgroups, for a total of 20,000 articles. For documentation and download, see this website. This data is useful for a variety of text classification and/or clustering projects. The "label" of each article is which of the 20 newsgroups it belongs to. The newsgroups (labels) are hierarchically organized (e.g., "sports", "hockey").
Available software: The same website provides an implementation of a Naive Bayes classifier for this text data. The code is quite robust, and some documentation is available, but it is difficult code to modify.
Project ideas:
- EM text classification in the case where you have labels for some documents, but not for others (see McCallum et al., and come up with your own suggestions)
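The core EM loop for semi-supervised Naive Bayes, in the spirit of McCallum and colleagues' approach, fits in a few lines. This is a simplified multinomial sketch on word-count vectors; the Laplace smoothing and the toy vocabulary in the example are illustrative assumptions:

```python
import numpy as np

def em_naive_bayes(X_lab, y_lab, X_unlab, n_classes, n_iter=10):
    """Semi-supervised multinomial Naive Bayes via EM: start from labeled
    counts, then alternate soft labeling of unlabeled docs (E-step) and
    re-estimation from all docs weighted by responsibilities (M-step)."""
    R_lab = np.eye(n_classes)[y_lab]                       # hard labels
    R_unlab = np.full((len(X_unlab), n_classes), 1.0 / n_classes)
    X = np.vstack([X_lab, X_unlab])
    for _ in range(n_iter):
        R = np.vstack([R_lab, R_unlab])
        prior = R.mean(axis=0)                             # M-step
        word = R.T @ X + 1.0                               # Laplace smoothing
        word /= word.sum(axis=1, keepdims=True)
        log_post = np.log(prior) + X_unlab @ np.log(word).T  # E-step
        log_post -= log_post.max(axis=1, keepdims=True)
        R_unlab = np.exp(log_post)
        R_unlab /= R_unlab.sum(axis=1, keepdims=True)
    return prior, word, R_unlab

# Toy 4-word vocabulary: class 0 uses words 0-1, class 1 uses words 2-3.
X_lab = np.array([[3, 2, 0, 0], [0, 0, 2, 3]], dtype=float)
X_unlab = np.array([[2, 1, 0, 0], [0, 0, 1, 2], [4, 0, 0, 0]], dtype=float)
prior, word, R_unlab = em_naive_bayes(X_lab, np.array([0, 1]), X_unlab, n_classes=2)
```

The interesting experimental question is how classification accuracy changes as you shrink the labeled set and grow the unlabeled one.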
Project E: Character recognition (digits) data
Optical character recognition, and the simpler digit recognition task, have been the focus of much ML research. We have two datasets on this topic. The first tackles the more general OCR task, on a small vocabulary of words. (Note that the first letter of each word was removed, since these were capital letters that would make the task harder for you.)
http://ai.stanford.edu/~btaskar/ocr/
Project suggestion:
- Use an HMM to exploit correlations between neighboring letters in the general OCR case to improve accuracy. (Since ZIP codes don't have such constraints between neighboring digits, HMMs will probably not help in the digit case.)
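The HMM decoding step in the suggestion above can be sketched as a standard Viterbi pass combining per-letter classifier scores with letter-bigram transitions. The toy probabilities in the example are illustrative, not estimates from the dataset:

```python
import numpy as np

def viterbi(emission_logp, transition_logp, initial_logp):
    """Most likely letter sequence given per-position classifier log-probs
    (T x K array) and letter-bigram transition log-probs (K x K array)."""
    T, K = emission_logp.shape
    score = initial_logp + emission_logp[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, K and T):  # standard forward pass over positions
        cand = score[:, None] + transition_logp       # K x K candidate scores
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + emission_logp[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                     # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Bigram constraints can override a weak per-letter guess: the second
# position's emission slightly favors letter 1, but transitions keep letter 0.
emis = np.log([[0.9, 0.1], [0.4, 0.6]])
trans = np.log([[0.95, 0.05], [0.5, 0.5]])
path = viterbi(emis, trans, np.log([0.5, 0.5]))
```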
Project F: NBA statistics data
This download contains 2004-2005 NBA and ABA stats for:
- Player regular season stats
- Player regular season career totals
- Player playoff stats
- Player playoff career totals
- Player all-star game stats
- Team regular season stats
- Complete draft history
- coaches_season.txt - NBA coaching records by season
- coaches_career.txt - NBA career coaching records
Currently all of the regular season
Project idea:
- Outlier detection on the players: find out who the outstanding players are.
- Predict game outcomes.
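A simple starting point for the outlier-detection idea is to score each player by how far his standardized stat line sits from the league average. The stat matrix in the example is made up, not taken from the dataset:

```python
import numpy as np

def outlier_scores(stats):
    """Per-player outlier score: Euclidean norm of the z-scored stat line,
    i.e. distance from the 'average player' in standardized units."""
    z = (stats - stats.mean(axis=0)) / stats.std(axis=0)
    return np.sqrt((z ** 2).sum(axis=1))

# Made-up (points, rebounds, assists) per-game rows; player 2 is the standout.
stats = np.array([[10.0, 5.0, 3.0],
                  [12.0, 4.0, 4.0],
                  [35.0, 12.0, 9.0],
                  [11.0, 6.0, 2.0]])
scores = outlier_scores(stats)
```

More robust variants (e.g. using medians, or a density-based outlier method) would be natural extensions once this baseline works.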
Project G: Precipitation data
This dataset includes 45 years of daily precipitation data from the Northwest of the US:
http://www.jisao.washington.edu/data_sets/widmann/
Project ideas:
- Weather prediction: learn a probabilistic model to predict rain levels.
- Sensor selection: where should you place sensors to best predict rain?
Project H: WebKB
This dataset contains webpages from 4 universities, labeled with whether they are professor, student, project, or other pages.
http://www-2.cs.cmu.edu/~webkb/
Project ideas:
- Learning classifiers to predict the type of webpage from the text
- Can you improve accuracy by exploiting correlations between pages that point to each other using graphical models?
Papers:
Project I: Deduplication
The datasets provided below consist of lists of records, and the goal is to identify, for any dataset, the set of records which refer to unique entities. This problem is known by the varied names of Deduplication, Identity Uncertainty, and Record Linkage.
http://www.cs.utexas.edu/users/ml/riddle/data.html
Project Ideas:
- One common approach is to cast the deduplication problem as a classification problem. Consider the set of record-pairs, and classify them as either "unique" or "not-unique".
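The pair-classification approach above needs per-pair features. A minimal sketch, using per-field Jaccard overlap of word tokens as one common (and here purely illustrative) similarity choice:

```python
def pair_features(rec_a, rec_b):
    """Similarity features for a record pair: per-field Jaccard overlap of
    lowercase word tokens. Feed these to any binary classifier trained on
    pairs labeled 'unique' vs. 'not-unique'."""
    feats = []
    for field_a, field_b in zip(rec_a, rec_b):
        tokens_a = set(field_a.lower().split())
        tokens_b = set(field_b.lower().split())
        union = tokens_a | tokens_b
        feats.append(len(tokens_a & tokens_b) / len(union) if union else 1.0)
    return feats

# Near-duplicate records (hypothetical examples) score high on most fields.
f = pair_features(("Art's Deli", "12224 Ventura Blvd."),
                  ("art's deli", "12224 Ventura Boulevard"))
```

Edit-distance or TF-IDF-weighted similarities are standard upgrades over raw Jaccard once the pipeline is in place.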
Papers:
Project J: Email Annotation
The datasets provided below are sets of emails. The goal is to identify which parts of the email refer to a person name. This task is an example of the general problem area of Information Extraction.
http://www.cs.cmu.edu/~einat/datasets.html
Project Ideas:
- Model the task as a Sequential Labeling problem, where each email is a sequence of tokens, and each token can have either a label of "person-name" or "not-a-person-name".
Papers: http://www.cs.cmu.edu/~einat/email.pdf
Project K: Netflix Prize Dataset
The Netflix Prize data set gives 100 million records of the form "user X rated movie Y a 4.0 on 2/12/05". The data is available here: Netflix Prize
Project idea:
- Can you predict the rating a user will give on a movie from the movies that user has rated in the past, as well as the ratings similar users have given similar movies?
- Can you discover clusters of similar movies or users?
- Can you predict which users rated which movies in 2006? In other words, your task is to predict the probability that each pair was rated in 2006. Note that the actual rating is irrelevant, and we just want whether the movie was rated by that user sometime in 2006. The date in 2006 when the rating was given is also irrelevant. The test data can be found at this website.
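The first idea above is classic neighborhood-based collaborative filtering. Here is a minimal sketch using mean-centered cosine (Pearson-style) similarity over commonly rated movies; representing "unrated" as 0 in a dense matrix is a simplifying assumption for illustration, not the Netflix file format:

```python
import numpy as np

def predict_rating(R, user, movie):
    """Predict R[user, movie] as a similarity-weighted average of other
    users' ratings of that movie. Similarity is mean-centered cosine over
    the movies both users rated; 0 marks an unrated entry."""
    num = den = 0.0
    for other in range(R.shape[0]):
        if other == user or R[other, movie] == 0:
            continue
        common = (R[user] > 0) & (R[other] > 0)
        if not common.any():
            continue
        a = R[user, common] - R[user, common].mean()
        b = R[other, common] - R[other, common].mean()
        na, nb = np.linalg.norm(a), np.linalg.norm(b)
        if na == 0 or nb == 0:
            continue  # flat rating profile carries no preference signal
        sim = float(a @ b) / (na * nb)
        num += sim * R[other, movie]
        den += abs(sim)
    return num / den if den else 0.0

# User 0's tastes track user 1's; user 2 rates everything the same.
R = np.array([[5.0, 4.0, 0.0],
              [5.0, 4.0, 5.0],
              [1.0, 1.0, 1.0]])
pred = predict_rating(R, user=0, movie=2)
```

At Netflix scale you would replace the dense matrix and the per-pair loop with sparse storage and precomputed neighbor lists.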
Project L: Physiological Data Modeling (bodymedia)
Physiological data offers many challenges to the machine learning community including dealing with large amounts of data, sequential data, issues of sensor fusion, and a rich domain complete with noise, hidden variables, and significant effects of context.
1. Which sensors correspond to each column?
| characteristic1 | age |
| characteristic2 | handedness |
| sensor1 | gsr_low_average |
| sensor2 | heat_flux_high_average |
| sensor3 | near_body_temp_average |
| sensor4 | pedometer |
| sensor5 | skin_temp_average |
| sensor6 | longitudinal_accelerometer_SAD |
| sensor7 | longitudinal_accelerometer_average |
| sensor8 | transverse_accelerometer_SAD |
| sensor9 | transverse_accelerometer_average |
2. What are the activities behind each annotation?
The annotations for the contest were:
5102 = sleep
3104 = watching TV
Datasets can be downloaded from http://www.cs.utexas.edu/users/sherstov/pdmc/
Project idea:
- Behavior classification: classify the person based on the sensor measurements.
Project M: Object Recognition
The Caltech 256 dataset contains images of 256 object categories taken at varying orientations, varying lighting conditions, and with different backgrounds.
http://www.vision.caltech.edu/Image_Datasets/Caltech256/
Project ideas:
- You can try to create an object recognition system which can identify which object category is the best match for a given test image.
- Apply clustering to learn object categories without supervision
Project N: Learning POMDP structure so as to maximize utility
Hoey & Little (CVPR 04) show how to learn the state space, and parameters, of a POMDP so as to maximize utility in a visual face gesture recognition task. (This is similar to the concept of "utile distinctions" developed in Andrew McCallum's PhD thesis.) The goal of this project is to reproduce Hoey's work in a simpler (non-visual) domain, such as McCallum's driving task.
Project O: Learning partially observed MRFs: the Langevin algorithm
In the recently proposed exponential family harmonium model (Welling et al., Xing et al.), a contrastive divergence (CD) algorithm was used to learn the parameters of the model (essentially a partially observed, two-layer MRF). In Xing et al., a comparison to variational learning was performed. CD is essentially a gradient ascent algorithm in which the gradient is approximated by a few samples. The Langevin method adds a random perturbation to the gradient and can often help to get the learning process out of local optima. In this project you will implement the Langevin learning algorithm for Xing's dual wing harmonium model, and test your algorithm on the data in my UAI paper. See Zoubin Ghahramani's paper on Bayesian learning of MRFs for reference.
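The Langevin update itself is a one-line change to gradient ascent: inject Gaussian noise into each step. A minimal sketch on a toy objective; the annealing schedule and the objective are illustrative assumptions, not the harmonium model:

```python
import numpy as np

def langevin_ascent(grad, theta0, steps=2000, lr=0.01, seed=0):
    """Gradient ascent with a Gaussian perturbation added to every step
    (annealed over time), which can shake the iterate out of local optima."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    for t in range(steps):
        noise = rng.normal(size=theta.shape) * np.sqrt(2.0 * lr) / np.sqrt(1.0 + t)
        theta = theta + lr * grad(theta) + noise
    return theta

# Toy concave objective f(theta) = -(theta - 3)^2, with gradient -2(theta - 3);
# in the project, grad would be the CD-style sampled gradient of the harmonium.
theta = langevin_ascent(lambda th: -2.0 * (th - 3.0), np.array([0.0]))
```

For the real model, `grad` becomes the sample-approximated CD gradient, and the step size and noise schedule become tuning knobs to compare against plain CD.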
Project P: Context-specific independence
We learned in class that CSI can speed up inference. In this project, you can explore this further. For example, implement the recursive conditioning approach of Adnan Darwiche, and compare it to variable elimination and clique trees. When is recursive conditioning faster? Can you find practical BNs where the speed-up is considerable? Can you learn such BNs from data?
Project Q: Enron E-mail Dataset
The Enron E-mail data set contains about 500,000 e-mails from about 150 users. The data set is available here: Enron Data
Project ideas:
- Can you classify the text of an e-mail message to decide who sent it?
Project R: More data
There are many other datasets out there. UC Irvine has a repository that could be useful for your project:
http://www.ics.uci.edu/~mlearn/MLRepository.html
Sam Roweis also has a link to several datasets out there: