Lecture 24: Integrative Paradigms of GM: Regularized Bayesian Methods

Regularized Bayesian Methods and some applications.

Learning GMs

There are different frameworks for learning GMs. First, the Bayesian framework: it allows priors to be introduced, and both parametric and nonparametric Bayesian techniques can be applied to learning these models. Second, the max-margin framework: the SVM is an example (it can be used to learn not only a classifier but also a graphical model). Third, there are the kernel methods, of which the SVM is also an example; Gaussian processes, another nonparametric Bayesian paradigm, are another application of kernel methods.

These frameworks have complementary advantages. For example, in the Bayesian framework we can incorporate prior knowledge and bypass model selection; in the SVM, the solution depends only on the support vectors, so most training points do not affect the decision boundary; and so on. It is possible to take the ideas of these different frameworks and enjoy the advantages of all of them in one single paradigm. Potentially these ideas could also be used to further empower the already powerful deep-learning models, which would be an interesting new topic to explore in the future.

Bayesian inference

We are already familiar with Bayes' rule:

\begin{aligned} p(\mathcal{M}\vert\mathbf{x}) = \frac{p(\mathbf{x}\vert\mathcal{M})\,\pi(\mathcal{M})}{p(\mathbf{x})}, \end{aligned}

where $\mathcal{M}$ is a model from some hypothesis space and $\mathbf{x}$ is the observed data. The Bayesian framework allows you to derive a posterior distribution over models. The prior distribution, i.e. the $\pi(\mathcal{M})$ part, needs to be provided and is usually selected according to one's needs, while the likelihood $p(\mathbf{x}\vert\mathcal{M})$ needs to be designed and can be the graphical part of the model.

In parametric Bayesian inference, $\mathcal{M}$ is represented as a finite set of parameters $\theta$.

  1. a parametric likelihood: $\mathbf{x} \sim p(\cdot\vert\theta)$
  2. Prior on $\theta$: $\pi(\theta)$
  3. Posterior: $p(\theta \vert \mathbf{x}) = \frac{p(\mathbf{x}\vert\theta)\pi(\theta)}{\int p(\mathbf{x}\vert\theta)\pi(\theta)d{\theta}} \propto p(\mathbf{x}\vert\theta)\pi(\theta)​$
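To make the parametric case concrete, here is a minimal sketch (not from the lecture; the coin-flip data and Beta hyperparameters are made up) of a conjugate Beta-Bernoulli update, where the posterior over $\theta$ stays in the same family as the prior:

```python
import numpy as np

# Hypothetical coin-flip data: 1 = heads, 0 = tails.
x = np.array([1, 0, 1, 1, 0, 1, 1, 1])

# Beta(a0, b0) prior on the Bernoulli parameter theta (illustrative choice).
a0, b0 = 2.0, 2.0

# Conjugacy: p(theta | x) ∝ p(x | theta) * pi(theta) stays in the Beta family,
# so the posterior is Beta(a0 + #heads, b0 + #tails).
a_post = a0 + x.sum()
b_post = b0 + (len(x) - x.sum())

print(f"Posterior: Beta({a_post:.1f}, {b_post:.1f})")
print(f"Posterior mean of theta: {a_post / (a_post + b_post):.3f}")
```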

There is not much flexibility in the choice of the model itself: you can choose, say, a Gaussian model or a Dirichlet distribution, but the flexibility lies in how it is parameterized. Define a prior distribution over the parameters, and you obtain a posterior distribution over the parameters.

In nonparametric Bayesian inference, $\mathcal{M}$ is a richer model.

  1. Nonparametric likelihood: $\mathbf{x} \sim p(\cdot\vert\mathcal{M})$
  2. Prior on $\mathcal{M}$: $\pi(\mathcal{M})$
  3. Posterior: $ p(\mathcal{M}\vert x) = \frac{p(x\vert\mathcal{M})\pi(\mathcal{M})}{\int p(x\vert\mathcal{M})\pi(\mathcal{M})d\mathcal{M}} \propto p(x\vert\mathcal{M})\pi(\mathcal{M})$

The model itself becomes a space over which to make inference. For example, you may have an unknown number of components in a mixture model, or an unknown number of dimensions in a latent feature model. Popular nonparametric Bayesian models include the Dirichlet process, the Indian buffet process, and the Gaussian process. These models are more powerful than parametric Bayesian models: they pay more attention to the power of the data, and the interplay between the data and the prior is more natural. They allow us to bypass the model selection problem and let the data itself determine the model complexity.
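To illustrate how such a prior places mass on an unbounded number of components, the following is a hedged sketch of truncated stick-breaking draws from a Dirichlet process prior; the truncation level and concentration values are arbitrary choices, not from the lecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def stick_breaking_weights(alpha, truncation):
    """Draw mixture weights from a (truncated) DP stick-breaking prior."""
    betas = rng.beta(1.0, alpha, size=truncation)              # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas)[:-1]])
    return betas * remaining                                   # pi_k = beta_k * prod_{j<k}(1 - beta_j)

# Smaller alpha concentrates mass on a few components; larger alpha spreads it
# over many, letting the data pick the effective number of components.
for alpha in (1.0, 10.0):
    pi = stick_breaking_weights(alpha, truncation=20)
    print(alpha, np.round(pi[:5], 3), "effective components:", (pi > 0.01).sum())
```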

There is a different, equivalent expression of Bayes' rule (Zellner, Am. Stat. 1988): the Bayesian posterior is the solution of the variational optimization problem

\begin{aligned} \min_{p(\mathcal{M})\in\mathcal{P}_{\text{prob}}} \operatorname{KL}\big(p(\mathcal{M})\,\|\,\pi(\mathcal{M})\big) - \int \log p(\mathbf{x}\vert\mathcal{M})\, p(\mathcal{M})\, d\mathcal{M}, \end{aligned}

where $\mathcal{P}_{\text{prob}}$ is a direct but trivial constraint requiring only that the posterior be a valid probability distribution. This variational expression of Bayes' rule turns inference into an optimization problem. It also gives room for new inference algorithms, or even for augmenting the model. This new expression can be used to steer Bayesian inference in some interesting directions.

With this expression we can play some tricks in redefining the space of admissible posterior distributions. The trivial constraint $\mathcal{P}_{\text{prob}}$, which allows any valid distribution, can be tightened so that only a subset of distributions is allowed, where the subset is defined by constraints derived from the data. For example, for every data point we can constrain the posterior to satisfy a margin requirement, and that gives a new set of feasible posterior distributions.
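Written out more explicitly, this is the shape of the regularized Bayesian (RegBayes) problem that the rest of the lecture builds on; the exact form of the slack penalty $U(\xi)$ and of the constraint set are design choices, so treat this as a sketch rather than the lecture's literal equation:

\begin{aligned} \min_{p(\mathcal{M}),\,\xi}\ & \operatorname{KL}\big(p(\mathcal{M})\,\|\,\pi(\mathcal{M})\big) - \int \log p(\mathbf{x}\vert\mathcal{M})\, p(\mathcal{M})\, d\mathcal{M} + U(\xi)\\ \text{ s.t. } & p(\mathcal{M}) \in \mathcal{P}_{\text{post}}(\xi), \end{aligned}

where $\mathcal{P}_{\text{post}}(\xi) \subseteq \mathcal{P}_{\text{prob}}$ is the data-dependent subset of distributions, e.g. one margin constraint per training example, relaxed by the slack variables $\xi$.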

MLE versus Max-margin learning

Now let’s put MLE and Max-margin learning side by side for comparison. For likelihood-based classification, a typical example is logistic regression. And for Max-margin learning, the example would be SVM.

In the classical predictive models, the input and output spaces are $\mathcal{X} \subseteq \mathbb{R}^d$ and $\mathcal{Y} = \{+1, -1\}$. We learn a linear discriminant $f(\mathbf{x};\mathbf{w}) = \mathbf{w}^\top\mathbf{x}$ by solving

\begin{aligned} \min_{\mathbf{w}} \sum_i \ell\big(y_i, f(\mathbf{x}_i;\mathbf{w})\big) + R(\mathbf{w}), \end{aligned}

where $\ell(\cdot)$ represents a convex loss, and $R(\mathbf{w})$ is a regularizer to prevent overfitting.

In logistic regression, the maximum likelihood estimate is

\begin{aligned} \hat{\mathbf{w}} = \arg\max_{\mathbf{w}} \sum_i \log p(y_i\vert\mathbf{x}_i,\mathbf{w}), \quad \text{where } p(y\vert\mathbf{x},\mathbf{w}) = \frac{1}{1+\exp(-y\,\mathbf{w}^\top\mathbf{x})}, \end{aligned}

i.e. you are maximizing the likelihood of the labels given the data points. A regularizer, e.g. a Gaussian prior $\mathcal{N}(\mathbf{0}, \lambda^{-1}I)$ on $\mathbf{w}$, can be introduced; this is sometimes called a shrinkage function.

It corresponds to minimizing a log loss with L2 regularization:

\begin{aligned} \min_{\mathbf{w}} \sum_i \log\big(1+\exp(-y_i\,\mathbf{w}^\top\mathbf{x}_i)\big) + \lambda\|\mathbf{w}\|_2^2. \end{aligned}

SVM is formulated very differently:

\begin{aligned} \min_{\mathbf{w},\xi} \frac{1}{2}\|\mathbf{w}\|^2 &+ C\sum_i \xi_i\\ \text{ s.t. } y_i\,\mathbf{w}^\top\mathbf{x}_i &\geq 1-\xi_i, \quad \xi_i \geq 0. \end{aligned}

It corresponds to minimizing a hinge loss with L2 regularization:

\begin{aligned} \min_{\mathbf{w}} \sum_i \max\big(0,\, 1-y_i\,\mathbf{w}^\top\mathbf{x}_i\big) + \lambda\|\mathbf{w}\|_2^2. \end{aligned}
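The correspondence between the two estimators and their regularized losses can be checked numerically; the following sketch (with toy data and an arbitrary regularization constant) simply evaluates both objectives for the same weight vector:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                            # toy features
w_true = rng.normal(size=5)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=100))     # labels in {-1, +1}

lam = 0.1                                                # L2 strength (arbitrary)

def log_loss_objective(w):
    # Logistic regression: sum_i log(1 + exp(-y_i w^T x_i)) + lam * ||w||^2
    margins = y * (X @ w)
    return np.log1p(np.exp(-margins)).sum() + lam * w @ w

def hinge_loss_objective(w):
    # SVM: sum_i max(0, 1 - y_i w^T x_i) + lam * ||w||^2
    margins = y * (X @ w)
    return np.maximum(0.0, 1.0 - margins).sum() + lam * w @ w

w = rng.normal(size=5)
print("log-loss objective:  ", log_loss_objective(w))
print("hinge-loss objective:", hinge_loss_objective(w))
```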

The two learning paradigms have complementary advantages. Likelihood-based models are probabilistic, so by introducing a prior distribution, Bayesian learning can be easily performed. Besides, probabilistic models allow the introduction of latent variables, enabling latent space models. SVM, on the other hand, does not naturally allow hidden variables because it is non-probabilistic. But its advantages are that the support vector property gives good guarantees on generalizability, and that it allows the kernel trick.

Maximum Entropy Discrimination (MED) is an approach that combines logistic regression and the SVM: instead of a single weight vector, it learns a distribution $p(\mathbf{w})$ and predicts by model averaging, $\hat{y} = \operatorname{sign}\int p(\mathbf{w})\,\mathbf{w}^\top\mathbf{x}\,d\mathbf{w}$. The optimization problem (binary classification) is:

\begin{aligned} \min_{p(\Theta)} \operatorname{KL}\big(p(\Theta)\,\|\,p_0(\Theta)\big) \quad \text{ s.t. } \int p(\Theta)\big[y_i F(\mathbf{x}_i;\mathbf{w}) - \xi_i\big]\, d\Theta \geq 0,\ \forall i, \end{aligned}

where $\Theta$ is the parameter $\mathbf{w}$ when $\xi$ is kept fixed, or the pair $(\mathbf{w},\xi)$ when we also want to optimize over $\xi$. This is a mechanical combination of the two approaches, because the margin idea is used to define constraints on the posterior and the likelihood idea is used to define the loss.

Structured Prediction Graphical Models

Conditional Random Fields (CRFs) are based on a logistic (log) loss. A CRF can be seen as a structured version of logistic regression, with an input space $\mathcal{X}$ of structured observations and an output space $\mathcal{Y}$ of structured labels (e.g. label sequences $\mathbf{y}=(y_1,\dots,y_L)$).

The max-likelihood estimation (point-estimate) is:

\begin{aligned} \max_{\mathbf{w}} \sum_i \log p(\mathbf{y}^i\vert\mathbf{x}^i;\mathbf{w}), \quad \text{where } p(\mathbf{y}\vert\mathbf{x};\mathbf{w}) = \frac{1}{Z(\mathbf{x};\mathbf{w})}\exp\big(\mathbf{w}^\top\mathbf{f}(\mathbf{x},\mathbf{y})\big). \end{aligned}

Max-Margin Markov Networks (M$^3$Ns) are based on a hinge loss. An M$^3$N is a structured version of SVM with the same structured input and output spaces $\mathcal{X}$ and $\mathcal{Y}$.

The max-margin learning (point-estimate) is the primal problem written out below as P0:

\begin{aligned} \min_{\mathbf{w},\xi} \frac{1}{2}\|\mathbf{w}\|^2 + C\sum_i \xi_i \quad \text{ s.t. } \mathbf{w}^\top\Delta f_i(y) \geq \Delta l_i(y) - \xi_i,\ \xi_i \geq 0,\ \forall i, \forall y\neq y^i. \end{aligned}

We have seen some examples of how models can be generalized; now we will see how to combine them. If we change the single boundary of SVM to a distribution over boundaries, we get MED; if we make the SVM structured, performing multi-label, coupled prediction, we get M$^3$N. Naturally, the next thing we can do is combine the two and have a distribution over structured SVMs as well. Thus we get the maximum entropy discrimination Markov networks (MED-MN).

Maximum Entropy Discrimination Markov Networks

Structured MaxEnt Discrimination (SMED):

\begin{aligned} \min _{p(w), \xi} \operatorname{KL}(p(w) \| p_0(w))+U(\xi)\\ \text { s.t. } : p(w) \in \mathcal{F}_1,\xi\geq 0,\forall i \end{aligned}

This is also known as generalized maximum entropy or regularized KL-divergence.

Feasible subspace of weight distribution:

\begin{aligned} \mathcal{F}_1 = \{p(w):\int p(w)[\Delta F_i(y;w)-\Delta l_i(y)]dw\geq -\xi_i,\forall i, \forall y \neq y^i\} \end{aligned}

Average from distribution of $M^3Ns$:

\begin{aligned} h_1(x;p(w)) = \arg \max_{y\in \mathcal{Y}(x)}\int p(w)F(x,y;w)dw \end{aligned}

We are going to use an objective which consists of the KL divergence between the posterior distribution and the prior distribution, plus a slack function to capture the margin error as in SVM. The posterior distribution is constrained to the feasible set $\mathcal{F}_1$, which enforces a predictive margin for every data point. The formulation also comes with a predictive rule that uses the model-averaging idea: it integrates over all values of the weights to obtain an ensemble prediction.
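As a minimal sketch of the model-averaging predictive rule: if we can sample weights from the learned posterior $p(w)$, the integral $\int p(w)F(x,y;w)\,dw$ can be approximated by an average over samples. The Gaussian stand-in posterior and the linear score $F$ below are hypothetical, chosen only to illustrate the rule:

```python
import numpy as np

rng = np.random.default_rng(0)

num_labels, dim = 4, 6
# Stand-in posterior over weights: one Gaussian weight vector per label.
post_mean = rng.normal(size=(num_labels, dim))
post_std = 0.3

def F(x, y, w):
    """Hypothetical discriminant score of label y for input x under weights w."""
    return w[y] @ x

def predict(x, num_samples=500):
    # h(x; p(w)) = argmax_y  E_{p(w)}[ F(x, y; w) ], approximated by Monte Carlo.
    w_samples = post_mean + post_std * rng.normal(size=(num_samples, num_labels, dim))
    avg_scores = np.array([
        np.mean([F(x, y, w) for w in w_samples]) for y in range(num_labels)
    ])
    return int(np.argmax(avg_scores))

x = rng.normal(size=dim)
print("averaged prediction:", predict(x))
```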

This formulation is different from pure maximum-margin learning and from pure maximum-likelihood learning. The KL divergence measures a distance between distributions, and we are doing distribution learning in a constrained space. Essentially, we are projecting the prior distribution onto the constrained space under the KL divergence.

Solution to MaxEnDNet

Posterior Distribution:

\begin{aligned} p(w) = \frac{1}{Z(\alpha)}\, p_0(w)\, \exp\Big\{ \sum_{i,\,y\neq y^i} \alpha_i(y)\big[\Delta F_i(y;w) - \Delta l_i(y)\big] \Big\} \end{aligned}

Dual Optimization Problem:

\begin{aligned} D1: \max_\alpha -\log Z(\alpha) -U^*(\alpha)\\ \text { s.t. } : \alpha_i(y)\geq 0,\forall i, \forall y,\\ \end{aligned}

The derived posterior distribution has a form similar to Bayes' rule. The key idea behind the SVM-style derivation is the primal-dual conversion in the optimization: the problem of solving for the decision-boundary weights is turned into a problem of solving for the alphas. The index of the weights spans the feature dimensions, while the index of the alphas spans the data points. Due to complementary slackness, alpha is zero for most data points, because most points do not lie on the decision boundary.

This is very interesting, as we are essentially doing Bayesian inference that incorporates the prior, and at the same time we achieve the support-vector effect by estimating the dual parameters, which depend on only a few data points in the training set; this is one of the key advantages of SVM. The solution to MaxEnDNet merges these two merits. However, solving the MaxEnDNet problem is not necessarily easy, depending on the prior. If the prior is Gaussian, the whole formulation reduces to a structured SVM.

Algorithmic issues of solving $M^3Ns$:

Formulation of the primal problem:

\begin{aligned} P0(M^3N):\min \frac{1}{2}||w||^2&+C\sum_{i=1}^N\xi_i\\ \text { s.t. }\forall i,\forall y\neq y^i:w^T\Delta f_i(y)&\geq \Delta l_i(y)-\xi_i,\\ \xi_i\geq 0 \end{aligned}

Algorithms used to solve the primal problem: Cutting Plane, Sub Gradient, etc.
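As a hedged illustration of the sub-gradient approach to the primal, here is a sketch for the simplest structured case, multiclass classification, where $\Delta f_i(y) = f(x_i, y^i) - f(x_i, y)$ and loss-augmented inference is just an argmax over labels; the data, feature map, and step size are all made up:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim, n = 3, 4, 60
X = rng.normal(size=(n, dim))
W_true = rng.normal(size=(num_classes, dim))
y = np.argmax(X @ W_true.T, axis=1)              # toy, linearly generated labels

def joint_feature(x, label):
    """f(x, y): one block of features per class (simplest structured feature map)."""
    f = np.zeros((num_classes, dim))
    f[label] = x
    return f.ravel()

def subgradient_m3n(C=1.0, step=0.01, epochs=100):
    w = np.zeros(num_classes * dim)
    for _ in range(epochs):
        grad = w.copy()                          # gradient of (1/2)||w||^2
        for i in range(n):
            # Loss-augmented inference: argmax_y [ w^T f(x_i, y) + Delta_l_i(y) ]
            scores = [w @ joint_feature(X[i], yp) + (yp != y[i]) for yp in range(num_classes)]
            y_hat = int(np.argmax(scores))
            if y_hat != y[i]:                    # margin violated: add hinge subgradient
                grad += C * (joint_feature(X[i], y_hat) - joint_feature(X[i], y[i]))
        w -= step * grad
    return w

w = subgradient_m3n()
pred = [int(np.argmax([w @ joint_feature(x, yp) for yp in range(num_classes)])) for x in X]
print("training accuracy:", np.mean(np.array(pred) == y))
```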

Formulation of the dual problem:

\begin{aligned} D0(M^3N): \max_\alpha \sum_{i,y}\alpha_i(y)\Delta l_i(y) &-\frac{1}{2}\eta^T\eta\\ \text{ s.t. }\forall i,\forall y: \sum_{y}\alpha_i(y)&=C;\alpha_i(y)\geq 0,\\ \text{where } \eta &= \sum_{i,y}\alpha_i(y)\Delta f_i(y) \end{aligned}

Algorithms used to solve the dual problem: SMO, Exponentiated Gradient etc.

Variational Learning of LapMEDN

The exact primal and dual objectives are hard to optimize directly.

Primal form:

With the following constraint:

Dual form:

\begin{aligned} \max_\alpha \sum_{i,y}\alpha_i(y)\Delta l_i(y) - \sum_{k=1}^K\log \frac{\lambda}{\lambda - \eta_k^2}\\ \text{ s.t. } \sum_{y}\alpha_i(y) = C;\alpha_i(y)\geq 0, \forall i, \forall y \end{aligned}

Instead, we can use the hierarchical representation of the Laplace prior (a scale mixture of Gaussians) and obtain the following upper bound:

\begin{aligned} KL(p||p_0) \leq -H(p)-<\int q(\tau)\log \frac{p(w|\tau)p(\tau|\lambda)}{q(\tau)}d\tau>_p =L(p(w),q(\tau)) \end{aligned}

Then we can optimize this derived upper bound:

The advantages of MEDN

  1. An averaging model: it enjoys a PAC-Bayesian prediction-error guarantee, which gives practitioners insight into how to reduce complexity and training cost.

  2. Entropy regularization: introducing useful biases. The Bayesian framework allows you to control the sparsity and behavior of the weights in a soft way. Depending on how you adjust the hyper-parameters of the Laplace prior in MaxEnDNet, you obtain a range of regularization behaviors like the L1/L2 constraints, with a smooth middle ground among L1, L2, and other constraints.

  3. Integrating Generative and Discriminative principles

  4. Incorporating latent variables and structures. Latent variables are rarely explored formally in the SVM framework, but very often in the Bayesian framework. Allowing latent variables in this Bayesian SVM yields interesting results.

Experimental results on OCR datasets

Regardless of the size of training data, the LapMEDN model consistently outperforms the other models.

Structured output example for an ambiguous image
Comparison of LapMEDN with baselines under different amounts of training data

Partially Observed MaxEnDNet (PoMEN)

PoMEN learning:

With the following specification:

And:

Prediction:

The above can be optimized using an alternating minimization algorithm.

Step 1: keep $p(z)$ fixed, optimize over $p(w)$.

Step 2: keep $p(w)$ fixed, optimize over $p(z)$.
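The structure of this scheme is the standard block-coordinate (alternating) minimization pattern. Here is a tiny, self-contained sketch of the same pattern on an unrelated bi-convex toy problem (rank-1 matrix factorization), just to show the fix-one-block, optimize-the-other loop; it is not the PoMEN update itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bi-convex objective: min_{u, v} ||X - u v^T||^2.
# Fixing v, the problem is convex (closed form) in u, and vice versa,
# mirroring the "fix p(z), optimize p(w); fix p(w), optimize p(z)" pattern.
X = np.outer(rng.normal(size=20), rng.normal(size=15)) + 0.01 * rng.normal(size=(20, 15))

u = rng.normal(size=20)
v = rng.normal(size=15)
for it in range(30):
    u = X @ v / (v @ v)          # step 1: v fixed, solve for u
    v = X.T @ u / (u @ u)        # step 2: u fixed, solve for v
    if it % 10 == 0:
        print(it, np.linalg.norm(X - np.outer(u, v)))
```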

Predictive Latent Subspace Learning via a large-margin approach

There are latent space models such as Latent Dirichlet Allocation (LDA) and Principal Component Analysis (PCA). One of the major uses of latent space models is to obtain embeddings, and most of the time we want to use the embeddings to make better predictions. Finding latent subspace representations is an old topic: it means mapping a high-dimensional representation into a low-dimensional latent representation, where each dimension can have some interpretable meaning.

In the past, the whole process was divided into two steps: first, get the embedded features of every document; second, use the embedded features as augmented data to retrain a classifier. The classification error, which can be represented as a loss function, is different from the embedding loss function. Now we try to coalesce the two steps, which can be called predictive subspace learning with supervision.

Unsupervised latent subspace representations are generic but can be sub-optimal for prediction. Many datasets come with supervised side information. It can be noisy, but it is not random noise: for example, labels and rating scores are usually assigned based on some intrinsic property of the data, so it helps to suppress noise and capture the most useful aspects of the data. The final goal is to discover latent subspace representations that are both predictive and interpretable by exploiting weak supervision information.

LDA: Latent Dirichlet Allocation

As shown below, the model infers a latent topic variable for every word. During the process, it needs to initialize the Dirichlet parameter and draw a latent topic-proportion vector for every document. The generative procedure is:

LDA Model

Generative Procedure:

  • For each document $d$:
    • Sample a topic proportion $\theta_{d} \sim Dir(\alpha)$
    • For each word:
      • Sample a topic $Z_{d,n} \sim Mult(\theta_{d})$
      • Sample a word $W_{d,n} \sim Mult(\beta_{z_{d,n}})$
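A minimal sketch of this generative procedure with numpy; the vocabulary size, topic count, and hyperparameter values are arbitrary toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

num_topics, vocab_size, num_docs, doc_len = 3, 50, 4, 20
alpha = np.full(num_topics, 0.5)                                  # Dirichlet hyperparameter
beta = rng.dirichlet(np.full(vocab_size, 0.1), size=num_topics)   # topic-word distributions

docs = []
for d in range(num_docs):
    theta_d = rng.dirichlet(alpha)                   # topic proportions for document d
    words = []
    for n in range(doc_len):
        z_dn = rng.choice(num_topics, p=theta_d)     # sample a topic  Z_{d,n} ~ Mult(theta_d)
        w_dn = rng.choice(vocab_size, p=beta[z_dn])  # sample a word   W_{d,n} ~ Mult(beta_{z_dn})
        words.append(w_dn)
    docs.append(words)

print(docs[0])
```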

The joint distribution is

\begin{aligned} p(\theta, \mathbf{z}, \mathbf{W} \vert \alpha, \beta) = \prod_{d} p(\theta_d\vert\alpha) \prod_{n} p(z_{d,n}\vert\theta_d)\, p(w_{d,n}\vert z_{d,n}, \beta). \end{aligned}

But exact inference is intractable, so we use variational inference with a factorized (mean-field) distribution $q(\theta, \mathbf{z}\vert\gamma, \phi) = \prod_d q(\theta_d\vert\gamma_d)\prod_n q(z_{d,n}\vert\phi_{d,n})$, which yields a variational bound on the log-likelihood.

In this way, we can minimize the variational bound to estimate parameters and infer the posterior distribution.

Maximum Entropy Discrimination LDA (MedLDA)

The idea is to use the latent representations $Z_{d,n}$ to predict the label, so that the training of LDA becomes supervised. The goal is to influence $\theta$ indirectly and make the embeddings more task-oriented, i.e. more discriminative.

Bayesian sLDA

MED estimation can be divided into a MedLDA regression model and a MedLDA classification model. In MedLDA, the likelihood is replaced by the Bayesian sLDA likelihood, giving an LDA-based loss function, plus a prediction penalty on the margin, with constraints on these margins. This is quite flexible, since there are two variants: for example, predicting a service's rating from the latent variables of Yelp comments can use the MedLDA regression model, while predicting a label can use the MedLDA classification model. The objective function considers both predictive accuracy and model fit, so it augments the original LDA with new components.

The objective function and constraints for the MedLDA regression model are:

The objective function and constraints for the MedLDA classification model are:
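As a hedged sketch of the classification case, written in the MaxEnDNet notation used earlier rather than the lecture's exact notation, the idea is to couple the sLDA variational objective, written here as $\mathcal{L}(q)$, with per-document expected-margin constraints:

\begin{aligned} \min_{q,\,\xi}\ \mathcal{L}(q) + C\sum_{d=1}^{D}\xi_d \quad \text{ s.t. } \forall d, \forall y\neq y^d:\ \mathbb{E}_q\big[\eta^\top\Delta\mathbf{f}_d(y)\big] \geq \Delta l_d(y) - \xi_d,\ \xi_d \geq 0, \end{aligned}

so the topic posterior $q$ is pulled both toward explaining the words (through $\mathcal{L}(q)$) and toward making the expected discriminant separate the correct label from the alternatives.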

Comparison between LDA and supervised LDA

The embedding performance of the original LDA and the supervised LDA can be seen in the document-modeling experiment. The embeddings of the original LDA are less discriminative, as shown by the different colors overlapping with each other, while the embeddings from the supervised LDA are better separated. Therefore, MedLDA makes the classification problem easier.

Document Modeling
  1. Classification

In classification problems, the baseline is LDA + SVM, which uses two separate steps. sLDA and DiscLDA are probabilistic supervised topic models, while MedLDA and MedLDA + SVM are maximum-margin based models. The models based on the maximum-margin principle perform best. The measurement is the relative improvement ratio.

Classification Comparison
  2. Regression

The regression results show the same pattern as classification: the combination of the likelihood-based and margin-based procedures has the best performance. The measurements are predictive $R^{2}$ and per-word log-likelihood.

Regression Comparison
  3. Time Efficiency

The time efficiency of MedLDA is quite good, as the timing comparison shows: it is much faster than the purely probabilistic version, since MedLDA can exploit optimization tricks for the SVM-plus-LDA objective.

Time Efficiency Comparison

Infinite SVM

Another example involves exploring the large-margin idea in combination with nonparametric Bayesian models for classification and feature selection. For classification problems, it is common to use a mixture of classifiers. Conceptually, a mixture of SVMs can be regarded as a combination of SVMs with different weights, and it is preferred over logistic regression because SVMs admit kernel functions. Here, we place a prior over mixtures of SVMs, which leads to the infinite SVM.

Given the general theoretical framework of RegBayes,

In the case of the infinite SVM:

The infinite SVM is the first attempt to integrate Bayesian nonparametrics, large-margin learning, and kernel methods. The SVMs are treated as density functions to define the likelihood of the data. The detailed process is as follows.

  1. DP mixture of large-margin classifiers. This is the process to determine which classifier to use.

  2. Given a component classifier:

  3. Overall discriminant function:

  4. Prediction rule:

  5. Learning problem:

With some assumptions and relaxations, we can simplify the problem by approximating the variational distribution:

The optimization can be solved with coordinate descent: for one block of variational factors we solve an SVM learning problem, and for the other blocks we get closed-form update rules.

Compared to the infinite SVM, which is a Bayesian nonparametric latent class model, the infinite latent SVM is a Bayesian nonparametric latent feature/factor model, in which each data point is mapped to a set of latent factors. The prior used here is the Indian buffet process (IBP) instead of the DP prior used in the infinite SVM. The nonparametric IBP prior allows the model to have an unbounded number of latent features. The regularized inference problem can be efficiently solved with an iterative procedure that leverages existing high-performance convex optimization techniques.
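To make the "unbounded number of latent features" concrete, here is a small sketch of drawing a binary feature matrix from an IBP prior; the parameters are arbitrary, and this illustrates only the prior, not the full infinite latent SVM:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ibp(num_customers, alpha):
    """Sample a binary feature matrix from the Indian Buffet Process."""
    dishes_per_customer = []
    dish_counts = []                                  # how many customers tried each dish
    for i in range(1, num_customers + 1):
        # Try each existing dish k with probability m_k / i.
        chosen = [k for k, m_k in enumerate(dish_counts) if rng.random() < m_k / i]
        # Then try a Poisson(alpha / i) number of brand-new dishes.
        num_new = rng.poisson(alpha / i)
        chosen += list(range(len(dish_counts), len(dish_counts) + num_new))
        for k in chosen:
            if k < len(dish_counts):
                dish_counts[k] += 1
            else:
                dish_counts.append(1)
        dishes_per_customer.append(chosen)
    Z = np.zeros((num_customers, len(dish_counts)), dtype=int)
    for i, chosen in enumerate(dishes_per_customer):
        Z[i, chosen] = 1
    return Z

Z = sample_ibp(num_customers=8, alpha=2.0)
print(Z)    # rows: data points, columns: an unbounded set of latent features
```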

The experiments showed improved performance on the TRECVID2003 and Flickr image datasets.