Computer Science 5th Year Master's Thesis Presentation

  • Gates Hillman Centers
  • Traffic21 Classroom 6501
  • YAO CHONG LIM
  • Master's Student
  • Computer Science Department
  • Carnegie Mellon University
Master's Thesis Presentation

Neural Generative Modeling from Incomplete Data

Real-world machine learning systems must handle missing data well in order to be robust and reliable. One way to tackle this problem is to first impute the missing values. To this end, this thesis introduces a deep generative model, the variational auto-decoder (VAD), a variant of the stochastic gradient variational Bayes (SGVB) estimator first introduced by Kingma and Welling in 2013. The VAD framework directly optimizes the parameters of the approximate latent posterior during both training and testing, in contrast to the common variational auto-encoder (VAE) implementation of SGVB, which approximates the latent posterior with a trained encoder network.
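A minimal sketch of the contrast described above, assuming a PyTorch setup: instead of an encoder network, per-example posterior parameters (mu, log_var) are optimized directly with SGVB, and the reconstruction loss is masked so only observed entries contribute. The decoder architecture, dimensions, loss form, and masking scheme here are illustrative assumptions, not the thesis implementation.

```python
import torch
import torch.nn as nn

x_dim, z_dim, n = 784, 16, 128               # toy sizes (assumed)
x = torch.rand(n, x_dim)                     # toy data
mask = (torch.rand(n, x_dim) > 0.5).float()  # 1 = observed, 0 = missing

decoder = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim))

# VAD-style inference: free per-example variational parameters, no encoder network.
mu = nn.Parameter(torch.zeros(n, z_dim))
log_var = nn.Parameter(torch.zeros(n, z_dim))

opt = torch.optim.Adam(list(decoder.parameters()) + [mu, log_var], lr=1e-3)

for step in range(100):
    eps = torch.randn_like(mu)
    z = mu + (0.5 * log_var).exp() * eps      # reparameterization trick (SGVB)
    x_hat = torch.sigmoid(decoder(z))
    # Reconstruction loss only on observed entries, so missing values do not drive training.
    recon = ((x_hat - x) ** 2 * mask).sum(dim=1).mean()
    kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(dim=1).mean()
    loss = recon + kl
    opt.zero_grad()
    loss.backward()
    opt.step()

# At test time the same latent optimization can be run with the decoder frozen,
# and x_hat read off as the imputation of the missing entries.
```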

Through empirical evaluation on a wide range of datasets, and qualitative analysis of results in inpainting applications, we show that the encoder-based posterior approximation employed by VAEs degrades as the rate of missing information increases, while the VAD framework is more robust to the presence of missing data and can be used as a general imputation method across domains.

Thesis Committee:
Louis-Philippe Morency
Alex Hauptmann

Additional Thesis Information
