Machine Learning Thesis Proposal
- Gates Hillman Centers
- CALVIN MURDOCK
- Ph.D. Student
- Machine Learning Department
- Carnegie Mellon University
Data Decomposition for Constrained Visual Learning
With the increasing prevalence of large image datasets, machine learning has had a substantial influence on the field of computer vision. In place of specialized domain knowledge, many problems are now dominated by statistical models that learn from large collections of training examples. However, purely data-driven approaches can be ineffective due to high dimensionality, incomplete or noisy annotations, insufficient training variability, or ambiguities inherent to the task. To address these issues, we provide novel interpretations and extensions of data decomposition that better leverage the rich structure and prior constraints inherent to visual learning.
We first introduce a formulation posed as approximate constraint satisfaction, which can accommodate instance-level prior knowledge. We apply this framework in Semantic Component Analysis, a method for weakly-supervised semantic segmentation with constraints that encourage interpretability even in the absence of supervision. From its close relationship to standard component analysis, we also derive Additive Component Analysis for learning nonlinear manifold representations with roughness-penalized additive models.
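To make the idea of decomposition under constraints concrete, the sketch below factorizes a data matrix X into a dictionary D and coefficients C restricted to a constraint set, using alternating projected gradient descent. This is a minimal, generic illustration assuming a simple nonnegativity constraint; the function name is hypothetical, and the actual Semantic Component Analysis formulation uses its own instance-level constraint sets.

```python
import numpy as np

def constrained_decomposition(X, k, steps=200, seed=0):
    """Approximate X ~= D @ C subject to C >= 0, by alternating
    projected gradient descent. A generic sketch of decomposition as
    approximate constraint satisfaction, not the exact thesis model."""
    rng = np.random.default_rng(seed)
    d, n = X.shape
    D = 0.1 * rng.standard_normal((d, k))
    C = np.abs(0.1 * rng.standard_normal((k, n)))
    for _ in range(steps):
        # Gradient step on the coefficients, then project onto C >= 0.
        R = D @ C - X
        step_c = 1.0 / (np.linalg.norm(D, 2) ** 2 + 1e-8)
        C = np.maximum(C - step_c * (D.T @ R), 0.0)
        # Gradient step on the dictionary (left unconstrained here).
        R = D @ C - X
        step_d = 1.0 / (np.linalg.norm(C, 2) ** 2 + 1e-8)
        D = D - step_d * (R @ C.T)
    return D, C
```

The step sizes are the inverse Lipschitz constants of each least-squares subproblem, so each alternating update cannot increase the reconstruction error; swapping the `np.maximum` projection for a projection onto a different set changes which prior the decomposition encodes.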
Building upon theoretical connections between deep learning and multilayer sparse coding, we then propose Deep Component Analysis, a more expressive nonlinear model that enforces hierarchical structure through multiple layers of constrained latent variables. Inference is implemented using Alternating Direction Neural Networks, which are trained discriminatively with backpropagation. As recurrent generalizations of feed-forward neural networks, they improve capacity by replacing nonlinear activation functions with constraints enforced by feedback connections. This is demonstrated experimentally through applications to constrained single-image depth prediction. More broadly, the perspective of data decomposition enables new opportunities for understanding and designing deep learning architectures.
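The replacement of activation functions by constraints can be sketched in a single layer: solving a nonnegatively constrained least-squares inference problem by projected gradient descent produces ReLU-shaped updates, and the first iterate (starting from zero, up to the step size) coincides with a feed-forward ReLU layer, while later iterates refine the code using feedback from the reconstruction error. This is a generic one-layer sketch with a Euclidean projection, not the exact Alternating Direction Neural Network algorithm; the function name is illustrative.

```python
import numpy as np

def infer_code(x, W, iters=50):
    """Solve  min_a 0.5 * ||x - W @ a||^2  s.t.  a >= 0
    by projected gradient descent. The projection onto {a >= 0}
    is exactly ReLU, so from a = 0 the first iterate is a scaled
    feed-forward ReLU layer; further iterations act as feedback."""
    step = 1.0 / (np.linalg.norm(W, 2) ** 2)  # inverse Lipschitz constant
    a = np.zeros(W.shape[1])
    for _ in range(iters):
        # Gradient step on the reconstruction error, then ReLU-project.
        a = np.maximum(a + step * (W.T @ (x - W @ a)), 0.0)
    return a
```

Viewed this way, a standard network computes only the first iteration of this procedure, and unrolling more iterations yields a recurrent architecture whose outputs better satisfy the imposed constraints.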
Thesis Committee:
- Simon Lucey (Chair)
- James Hays (Georgia Institute of Technology)