PhD student

Machine Learning Department

School of Computer Science

Carnegie Mellon University

Email: han (DOT) zhao [AT] cs (DOT) cmu (DOT) edu

Office: GHC 8021

[Curriculum Vitae] [Google Scholar] [Research Statement]

I am a final-year PhD student in the Machine Learning Department at Carnegie Mellon University. I am fortunate to work with my advisor, Prof. Geoff Gordon. Before coming to CMU, I obtained my BEng degree from the Computer Science Department at Tsinghua University and my MMath degree from the University of Waterloo. I have a broad interest in both the theoretical and applied sides of machine learning. In particular, I work on invariant representation learning, probabilistic reasoning with Sum-Product Networks, transfer and multitask learning, and computational social choice.

I also co-organize the AI seminar @ CMU; feel free to drop me an email if you are interested in giving a talk!

Conditional Learning of Fair Representations
H. Zhao, A. Coston, T. Adel and G. Gordon
In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020, Spotlight)
Also in the NeurIPS 2019 Workshop on Machine Learning with Guarantees
[abs] [pdf] [video] [slides] [code] We propose a novel algorithm for learning fair representations that can simultaneously mitigate two notions of disparity among different demographic subgroups. Two key components underpinning the design of our algorithm are balanced error rate and conditional alignment of representations. We show how these two components contribute to ensuring accuracy parity and equalized false-positive and false-negative rates across groups without impacting demographic parity. Furthermore, we also demonstrate both in theory and in two real-world experiments that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations.

Inherent Tradeoffs in Learning Fair Representations
H. Zhao and G. Gordon
In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
[abs] [pdf] [poster] [slides] [blog] With the prevalence of machine learning in high-stakes applications, especially the ones regulated by anti-discrimination laws or societal norms, it is crucial to ensure that the predictive models do not propagate any existing bias or discrimination. Due to the ability of deep neural nets to learn rich representations, recent advances in algorithmic fairness have focused on learning fair representations with adversarial techniques to reduce bias in data while preserving utility simultaneously. In this paper, through the lens of information theory, we provide the first result that quantitatively characterizes the tradeoff between demographic parity and the joint utility across different population groups. Specifically, when the base rates differ between groups, we show that any method aiming to learn fair representations admits an information-theoretic lower bound on the joint error across these groups. To complement our negative results, we also prove that if the optimal decision functions across different groups are close, then learning fair representations leads to an alternative notion of fairness, known as the accuracy parity, which states that the error rates are close between groups. Finally, our theoretical findings are also confirmed empirically on real-world datasets.

On Learning Invariant Representations for Domain Adaptation
H. Zhao, R. Tachet, K. Zhang and G. Gordon
In Proceedings of the 36th International Conference on Machine Learning (ICML 2019, Long Oral)
[abs] [pdf] [supplement] [poster] [slides] [blog] Due to the ability of deep neural nets to learn rich representations, recent advances in unsupervised domain adaptation have focused on learning domain-invariant features that achieve a small error on the source domain. The hope is that the learnt representation, together with the hypothesis learnt from the source domain, can generalize to the target domain. In this paper, we first construct a simple counterexample showing that, contrary to common belief, the above conditions are not sufficient to guarantee successful domain adaptation. In particular, the counterexample (Fig. 1) exhibits conditional shift: the class-conditional distributions of input features change between source and target domains. To give a sufficient condition for domain adaptation, we propose a natural and interpretable generalization upper bound that explicitly takes into account the aforementioned shift. Moreover, we shed new light on the problem by proving an information-theoretic lower bound on the joint error of any domain adaptation method that attempts to learn invariant representations. Our result characterizes a fundamental tradeoff between learning invariant representations and achieving small joint error on both domains when the marginal label distributions differ from source to target. Finally, we conduct experiments on real-world datasets that corroborate our theoretical findings. We believe these insights are helpful in guiding the future design of domain adaptation and representation learning algorithms.

Adversarial Multiple Source Domain Adaptation
H. Zhao*, S. Zhang*, G. Wu, J. Costeira, J. Moura and G. Gordon
In Proceedings of the 32nd Conference on Neural Information Processing Systems (NeurIPS 2018)
[abs] [pdf] [supplement] [poster] [code] While domain adaptation has been actively researched, most algorithms focus on the single-source-single-target adaptation setting. In this paper we propose new generalization bounds and algorithms under both classification and regression settings for unsupervised multiple source domain adaptation. Our theoretical analysis naturally leads to an efficient learning strategy using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the multiple domain shifts while still being discriminative for the learning task. To this end, we propose multisource domain adversarial networks (MDAN) that approach domain adaptation by optimizing task-adaptive generalization bounds. To demonstrate the effectiveness of MDAN, we conduct extensive experiments showing superior adaptation performance on both classification and regression problems: sentiment analysis, digit classification, and vehicle counting.

A Unified Approach for Learning the Parameters of Sum-Product Networks
H. Zhao, P. Poupart and G. Gordon
In Proceedings of the 30th Conference on Neural Information Processing Systems (NIPS 2016)
[abs] [pdf] [supplement] [poster] [code] We present a unified approach for learning the parameters of Sum-Product Networks (SPNs). We prove that any complete and decomposable SPN is equivalent to a mixture of trees where each tree corresponds to a product of univariate distributions. Based on the mixture model perspective, we characterize the objective function when learning SPNs based on the maximum likelihood estimation (MLE) principle and show that the optimization problem can be formulated as a signomial program. Both the projected gradient descent (PGD) and the exponentiated gradient (EG) in this setting can be viewed as first order approximations of the signomial program after proper transformation of the objective function. Based on the signomial program formulation, we construct two parameter learning algorithms for SPNs by using sequential monomial approximations (SMA) and the concave-convex procedure (CCCP), respectively. The two proposed methods naturally admit multiplicative updates, hence effectively avoiding the projection operation. With the help of the unified framework, we also show that, in the case of SPNs, CCCP leads to the same algorithm as Expectation Maximization (EM) despite the fact that they are different in general. Extensive experiments on 20 data sets demonstrate the effectiveness and efficiency of the two proposed approaches for learning SPNs. We also show that the proposed methods can improve the performance of structure learning and yield state-of-the-art results.
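As a concrete illustration of why multiplicative updates sidestep the projection operation, here is a minimal sketch (illustrative only, not code from the paper) of an exponentiated-gradient step on a weight vector constrained to the probability simplex:

```python
import numpy as np

def eg_update(w, grad, eta=0.1):
    """One exponentiated-gradient step: a multiplicative update followed by
    renormalization keeps w strictly positive and summing to one."""
    w_new = w * np.exp(-eta * grad)
    return w_new / w_new.sum()

w = np.array([0.5, 0.3, 0.2])   # e.g., the weights of one SPN sum node
g = np.array([1.0, -0.5, 0.2])  # a hypothetical gradient at w
w = eg_update(w, g)
# w is still a valid distribution; no projection back onto the simplex needed
```

Because each weight is rescaled by a positive factor and then renormalized, the iterate never leaves the simplex, which is the practical appeal of the multiplicative updates mentioned in the abstract.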

Collapsed Variational Inference for Sum-Product Networks
H. Zhao, T. Adel, G. Gordon and B. Amos
In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016)
[abs] [pdf] [poster] [slides] [code] Sum-Product Networks (SPNs) are probabilistic inference machines that admit exact inference in linear time in the size of the network. Existing parameter learning approaches for SPNs are largely based on the maximum likelihood principle and hence are subject to overfitting compared to more Bayesian approaches. Exact Bayesian posterior inference for SPNs is computationally intractable. Both standard variational inference and posterior sampling for SPNs are computationally infeasible even for networks of moderate size due to the large number of local latent variables per instance. In this work, we propose a novel deterministic collapsed variational inference algorithm for SPNs that is computationally efficient, easy to implement and at the same time allows us to incorporate prior information into the optimization formulation. Extensive experiments show a significant improvement in accuracy compared with a maximum likelihood based approach.
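To make the "exact inference in linear time" claim concrete, the following is a minimal, hypothetical sketch (not code from any of the papers above) of SPN evaluation by a single bottom-up pass. The example network is a tree for simplicity; a DAG-structured SPN would additionally memoize node values so each node is still computed only once.

```python
import math

def eval_spn(node, evidence):
    """Evaluate an SPN bottom-up on evidence {var: value}; each node is
    visited once, so the cost is linear in the number of edges."""
    if node["type"] == "leaf":                    # univariate distribution
        return node["dist"](evidence[node["var"]])
    vals = [eval_spn(c, evidence) for c in node["children"]]
    if node["type"] == "sum":                     # weighted mixture
        return sum(w * v for w, v in zip(node["weights"], vals))
    return math.prod(vals)                        # product node

# Toy SPN over two binary variables X0, X1: a mixture of two products.
bern = lambda p: (lambda x: p if x == 1 else 1.0 - p)
leaf = lambda var, p: {"type": "leaf", "var": var, "dist": bern(p)}
spn = {"type": "sum", "weights": [0.4, 0.6], "children": [
    {"type": "product", "children": [leaf(0, 0.9), leaf(1, 0.2)]},
    {"type": "product", "children": [leaf(0, 0.3), leaf(1, 0.7)]},
]}
p = eval_spn(spn, {0: 1, 1: 1})  # 0.4*0.9*0.2 + 0.6*0.3*0.7 = 0.198
```

Summing the evaluated probability over all four assignments of (X0, X1) returns 1, as expected of a normalized SPN with complete and decomposable structure.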

On the Relationship between Sum-Product Networks and Bayesian Networks
H. Zhao, M. Melibari and P. Poupart
In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015)
[abs] [pdf] [supplement] [Full arXiv version] [slides] [poster] In this paper, we establish some theoretical connections between Sum-Product Networks (SPNs) and Bayesian Networks (BNs). We prove that every SPN can be converted into a BN in linear time and space in terms of the network size. The key insight is to use Algebraic Decision Diagrams (ADDs) to compactly represent the local conditional probability distributions at each node in the resulting BN by exploiting context-specific independence (CSI). The generated BN has a simple directed bipartite graphical structure. We show that by applying the Variable Elimination algorithm (VE) to the generated BN with ADD representations, we can recover the original SPN, where the SPN can be viewed as a history record or caching of the VE inference process. To help state the proof clearly, we introduce the notion of a "normal" SPN and present a theoretical analysis of the consistency and decomposability properties. We conclude the paper with some discussion of the implications of the proof and establish a connection between the depth of an SPN and a lower bound of the tree-width of its corresponding BN.

Linear Time Computation of Moments in Sum-Product Networks
H. Zhao and G. Gordon
In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017)
[abs] [pdf] [poster] Bayesian online algorithms for Sum-Product Networks (SPNs) need to update their posterior distribution after seeing one single additional instance. To do so, they must compute moments of the model parameters under this distribution. The best existing method for computing such moments scales quadratically in the size of the SPN, although it scales linearly for trees. This unfortunate scaling makes Bayesian online algorithms prohibitively expensive, except for small or tree-structured SPNs. We propose a linear-time algorithm that works even when the SPN is a general directed acyclic graph (DAG). Our algorithm significantly broadens the applicability of Bayesian online algorithms for SPNs. There are three key ingredients in the design and analysis of our algorithm: (1) For each edge in the graph, we find a linear time reduction from the moment computation problem to a joint inference problem in SPNs. (2) Using the property that each SPN computes a multilinear polynomial, we construct an efficient procedure for polynomial evaluation by differentiation without expanding the network that may contain exponentially many positive monomials. (3) We propose a dynamic programming method to further reduce the computation of the moments of all the edges in the graph from quadratic to linear. We demonstrate the usefulness of our linear-time moment computation algorithm by applying it to develop a linear-time assumed density filter (ADF) for SPNs.

Online Algorithms for Sum-Product Networks with Continuous Variables
P. Jaini, A. Rashwan, H. Zhao, Y. Liu, E. Banijamali, Z. Chen and P. Poupart
In Proceedings of the 8th International Conference on Probabilistic Graphical Models (PGM 2016)
[abs] [pdf] Sum-product networks (SPNs) have recently emerged as an attractive representation due to their dual interpretation as a special type of deep neural network with clear semantics and a tractable probabilistic graphical model. We explore online algorithms for parameter learning in SPNs with continuous variables. More specifically, we consider SPNs with Gaussian leaf distributions and show how to derive an online Bayesian moment matching algorithm to learn from streaming data. We compare the resulting generative models to stacked restricted Boltzmann machines and generative moment matching networks on real-world datasets.

Online and Distributed Bayesian Moment Matching for Parameter Learning in Sum-Product Networks
A. Rashwan, H. Zhao and P. Poupart
In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016)
[abs] [pdf] Probabilistic graphical models provide a general and flexible framework for reasoning about complex dependencies in noisy domains with many variables. Among the various types of probabilistic graphical models, sum-product networks (SPNs) have recently generated some interest because exact inference can always be done in linear time with respect to the size of the network. This is particularly attractive since it means that learning an SPN from data always yields a tractable model for inference. However, existing parameter learning algorithms for SPNs operate in batch mode and do not scale easily to large datasets. In this work, we explore online algorithms to ensure that parameter learning can also be done tractably with respect to the amount of data. More specifically, we propose a new Bayesian moment matching (BMM) algorithm that operates naturally in an online fashion and that can be easily distributed. We demonstrate the effectiveness and scalability of BMM in comparison to online extensions of gradient descent and expectation maximization on 20 classic benchmarks and 4 large scale datasets.

Continual Learning with Adaptive Weights (CLAW)
T. Adel, H. Zhao and R. E. Turner
In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020)
[abs] [pdf] [poster] Approaches to continual learning aim to successfully learn a set of related tasks that arrive in an online manner. Recently, several frameworks have been developed which enable deep learning to be deployed in this learning scenario. A key modelling decision is to what extent the architecture should be shared across tasks. On the one hand, separately modelling each task avoids catastrophic forgetting but it does not support transfer learning and leads to large models. On the other hand, rigidly specifying a shared component and a task-specific part enables task transfer and limits the model size, but it is vulnerable to catastrophic forgetting and restricts the form of task-transfer that can occur. Ideally, the network should adaptively identify which parts of the network to share in a data-driven way. Here we introduce such an approach called Continual Learning with Adaptive Weights (CLAW), which is based on probabilistic modelling and variational inference. Experiments show that CLAW achieves state-of-the-art performance on six benchmarks in terms of overall continual learning performance, as measured by classification accuracy, and in terms of addressing catastrophic forgetting.

Learning Neural Networks with Adaptive Regularization
H. Zhao*, Y. H. Tsai*, R. Salakhutdinov and G. Gordon
In Proceedings of the 33rd Conference on Neural Information Processing Systems (NeurIPS 2019)
[abs] [pdf] [poster] [slides] [code] Feed-forward neural networks can be understood as a combination of an intermediate representation and a linear hypothesis. While most previous works aim to diversify the representations, we explore the complementary direction by performing an adaptive and data-dependent regularization motivated by the empirical Bayes method. Specifically, we propose to construct a matrix-variate normal prior (on weights) whose covariance matrix has a Kronecker product structure. This structure is designed to capture the correlations in neurons through backpropagation. Under the assumption of this Kronecker factorization, the prior encourages neurons to borrow statistical strength from one another. Hence, it leads to an adaptive and data-dependent regularization when training networks on small datasets. To optimize the model, we present an efficient block coordinate descent algorithm with analytical solutions. Empirically, we demonstrate that the proposed method helps networks converge to local optima with smaller stable ranks and spectral norms. These properties suggest better generalization, and we present empirical results to support this expectation. We also verify the effectiveness of the approach on multiclass classification and multitask regression problems with various network structures.
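A small numerical illustration of the Kronecker-structured covariance (a generic sketch based on standard matrix-variate normal facts, not the paper's implementation): sampling W = A X Bᵀ with i.i.d. standard Gaussian X, where U = A Aᵀ is the row covariance and V = B Bᵀ the column covariance, yields vec(W) ~ N(0, V ⊗ U), so an nm-by-nm covariance is represented by just an n-by-n and an m-by-m factor.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 3, 2
U = np.array([[1.0, 0.3, 0.0], [0.3, 1.0, 0.2], [0.0, 0.2, 1.0]])  # rows
V = np.array([[1.0, 0.5], [0.5, 1.0]])                             # columns
A, B = np.linalg.cholesky(U), np.linalg.cholesky(V)

# Draw samples of W ~ MN(0, U, V) and check the Kronecker identity
# empirically; flatten(order="F") is column-stacking, matching vec(W).
samples = np.stack([
    (A @ rng.standard_normal((n, m)) @ B.T).flatten(order="F")
    for _ in range(50_000)
])
emp = samples.T @ samples / len(samples)   # empirical covariance of vec(W)
assert np.allclose(emp, np.kron(V, U), atol=0.05)
```

The identity follows from vec(AXBᵀ) = (B ⊗ A) vec(X), so the covariance of vec(W) is (B ⊗ A)(B ⊗ A)ᵀ = (BBᵀ) ⊗ (AAᵀ) = V ⊗ U.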

Deep Generative and Discriminative Domain Adaptation
H. Zhao, J. Hu, Z. Zhu, A. Coates and G. Gordon
In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019)
[abs] [pdf] [poster] The ability to adapt to and learn from different domains and environments is crucial for agents to generalize. In this paper we propose a probabilistic framework for domain adaptation that blends both generative and discriminative modeling in a principled way. Under this framework, generative and discriminative models correspond to specific choices of the prior over parameters. By maximizing both the marginal and the conditional log-likelihoods, our models can use both labeled instances from the source domain as well as unlabeled instances from both source and target domains. We show that the popular reconstruction loss of autoencoders corresponds to an upper bound on the negative marginal log-likelihoods of unlabeled instances, and give a generalization bound that explicitly incorporates it into the analysis. We instantiate our framework using neural networks, and build a concrete model, DAuto.

Efficient Multitask Feature and Relationship Learning
H. Zhao, O. Stretcu, A. Smola and G. Gordon
In Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI 2019)
Also in the Learning with Limited Labeled Data: Weak Supervision and Beyond workshop at NIPS 2017 [abs] [pdf] [supplement] [poster] We consider a multitask learning problem, in which several predictors are learned jointly. Prior research has shown that learning the relations between tasks, and between the input features, together with the predictor, can lead to better generalization and interpretability, which proved to be useful for applications in many domains. In this paper, we consider a formulation of multitask learning that learns the relationships both between tasks and between features, represented through a task covariance and a feature covariance matrix, respectively. First, we demonstrate that existing methods proposed for this problem present an issue that may lead to ill-posed optimization. We then propose an alternative formulation, as well as an efficient algorithm to optimize it. Using ideas from optimization and graph theory, we propose an efficient coordinate-wise minimization algorithm that has a closed-form solution for each block subproblem. Our experiments show that the proposed optimization method is orders of magnitude faster than its competitors. We also provide a nonlinear extension that is able to achieve better generalization than existing methods.

Unsupervised Domain Adaptation with a Relaxed Covariate Shift Assumption
T. Adel, H. Zhao and A. Wong
In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI 2017)
[abs] [pdf] Domain adaptation addresses learning tasks where training is performed on data from one domain whereas testing is performed on data belonging to a different but related domain. Assumptions about the relationship between the source and target domains should lead to tractable solutions on the one hand, and be realistic on the other hand. Here we propose a generative domain adaptation model that can encode different assumptions about this relationship, among them a newly introduced assumption that replaces covariate shift with a possibly more realistic one; tractability is preserved thanks to an efficient variational inference procedure. In addition to modeling less restrictive relationships between source and target, our model can be trained without any labeled target data (unsupervised domain adaptation). We also provide a Rademacher complexity bound for the proposed algorithm. We evaluate the model on the Amazon reviews and the CVC pedestrian detection datasets.

Frank-Wolfe Optimization for Symmetric-NMF under Simplicial Constraint
H. Zhao and G. Gordon
In Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI 2018)
[abs] [pdf] [supplement] Symmetric nonnegative matrix factorization has found abundant applications in various domains by providing a symmetric low-rank decomposition of nonnegative matrices. In this paper we propose a Frank-Wolfe (FW) solver to optimize the symmetric nonnegative matrix factorization problem under a simplicial constraint, which has recently been proposed for probabilistic clustering. Compared with existing solutions, this algorithm is simple to implement, and has no hyperparameters to be tuned. Building on the recent advances of FW algorithms in nonconvex optimization, we prove an O(1/ε²) convergence rate to ε-approximate KKT points, via a tight bound Θ(n²) on the curvature constant, which matches the best known result in the unconstrained nonconvex setting using gradient methods. Numerical results demonstrate the effectiveness of our algorithm. As a side contribution, we construct a simple nonsmooth convex problem where the FW algorithm fails to converge to the optimum. This result raises an interesting question about necessary conditions for the success of the FW algorithm on convex problems.
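For intuition, here is a generic sketch of the Frank-Wolfe iteration on the probability simplex (illustrative only; it uses a simple quadratic objective rather than the symmetric-NMF objective from the paper). The linear minimization oracle over the simplex is just the vertex with the smallest gradient coordinate, so every step is projection-free:

```python
import numpy as np

def frank_wolfe_simplex(grad_fn, x0, steps=2000):
    """Frank-Wolfe over the probability simplex with the standard
    2/(t+2) step size; the LMO always returns a simplex vertex."""
    x = x0.copy()
    for t in range(steps):
        g = grad_fn(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0          # linear minimization oracle
        gamma = 2.0 / (t + 2.0)
        x = (1.0 - gamma) * x + gamma * s
    return x

# Minimize ||x - p||^2 over the simplex; the optimum is p itself.
p = np.array([0.2, 0.5, 0.3])
x = frank_wolfe_simplex(lambda x: 2.0 * (x - p), np.full(3, 1.0 / 3.0))
```

Since every iterate is a convex combination of simplex vertices, feasibility is maintained for free; this is the same structural advantage the paper exploits for the simplicial constraint.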

SoF: Soft-Cluster Matrix Factorization for Probabilistic Clustering
H. Zhao, P. Poupart, Y. Zhang and M. Lysy
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015)
[abs] [pdf] [poster] We propose SoF (Soft-cluster matrix Factorization), a probabilistic clustering algorithm which softly assigns each data point into clusters. Unlike model-based clustering algorithms, SoF does not make assumptions about the data density distribution. Instead, we take an axiomatic approach to define four properties that the probability of co-clustered pairs of points should satisfy. Based on the properties, SoF utilizes a distance measure between pairs of points to induce the conditional co-cluster probabilities. The objective function in our framework establishes an important connection between probabilistic clustering and constrained symmetric Nonnegative Matrix Factorization (NMF), hence providing a theoretical interpretation for NMF-based clustering algorithms. To optimize the objective, we derive a sequential minimization algorithm using a penalty method. Experimental results on both synthetic and real-world datasets show that SoF significantly outperforms previous NMF-based algorithms and that it is able to detect non-convex patterns as well as cluster boundaries.

On Strategyproof Conference Peer Review
Y. Xu*, H. Zhao*, X. Shi and N. B. Shah
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019)
[abs] [pdf] [supplement] [Full arXiv version] We consider peer review in a conference setting where there are conflicts between the reviewers and the submissions. Under such conflicts, reviewers can manipulate their reviews in a strategic manner to influence the final rankings of their own papers. Present-day peer-review systems are not designed to guard against such strategic behavior, beyond minimal (and insufficient) checks such as not assigning a paper to a conflicted reviewer. In this work, we address this problem through the lens of social choice, and present a theoretical framework for strategyproof and efficient peer review. Given a conflict graph that satisfies a simple property, we first present and analyze a flexible framework for reviewer assignment and aggregation of reviews that guarantees not only strategyproofness but also a natural efficiency property (unanimity). Our framework is based on the so-called partitioning method, and can be treated as a generalization of this type of method to conference peer-review settings. We then empirically show that the requisite property on the (authorship) conflict graph is indeed satisfied in the ICLR-17 submissions data, and further demonstrate a simple trick to make the partitioning method more practically appealing in conference peer-review settings. Finally, we complement our positive results with negative theoretical results, where we prove that under slightly stronger requirements, it is impossible for any algorithm to be both strategyproof and efficient.

Convolutional-Recurrent Neural Networks for Speech Enhancement
H. Zhao, S. Zarar, I. Tashev and C. H. Lee
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018, Oral)
[abs] [pdf] [slides] We propose an end-to-end model based on convolutional and recurrent neural networks for speech enhancement. Our model is purely data-driven and does not make any assumptions about the type or the stationarity of the noise. In contrast to existing methods that use multilayer perceptrons (MLPs), we employ both convolutional and recurrent neural network architectures. Thus, our approach allows us to exploit local structures in both the frequency and temporal domains. By incorporating prior knowledge of speech signals into the design of model structures, we build a model that is more data-efficient and achieves better generalization on both seen and unseen noise. Based on experiments with synthetic data, we demonstrate that our model outperforms existing methods, improving PESQ by up to 0.6 on seen noise and 0.64 on unseen noise.

Active Learning of Strict Partial Orders: A Case Study on Concept Prerequisite Relations
C. Liang, J. Ye, H. Zhao, B. Pursel and C. Lee Giles
In Proceedings of the 12th International Conference on Educational Data Mining (EDM 2019)
[abs] [pdf] Strict partial order is a mathematical structure commonly seen in relational data. One obstacle to extracting this type of relation at scale is the lack of large-scale labels for building effective data-driven solutions. We develop an active learning framework for mining such relations subject to a strict order. Our approach incorporates relational reasoning not only in finding new unlabeled pairs whose labels can be deduced from an existing label set, but also in devising new query strategies that consider the relational structure of labels. Our experiments on concept prerequisite relations show our proposed framework can substantially improve the classification performance with the same query budget compared to other baseline approaches.

Self-Adaptive Hierarchical Sentence Model
H. Zhao, Z. Lu and P. Poupart
In Proceedings of the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015)
[abs] [pdf] [slides] [poster] [code] The ability to accurately model a sentence at varying stages (e.g., word-phrase-sentence) plays a central role in natural language processing. As an effort towards this goal we propose a self-adaptive hierarchical sentence model (AdaSent). AdaSent effectively forms a hierarchy of representations from words to phrases and then to sentences through recursive gated local composition of adjacent segments. We design a competitive mechanism (through gating networks) to allow the representations of the same sentence to be engaged in a particular learning task (e.g., classification), therefore effectively mitigating the gradient vanishing problem persistent in other recursive models. Both qualitative and quantitative analyses show that AdaSent can automatically form and select the representations suitable for the task at hand during training, yielding superior classification performance over competitor models on 5 benchmark data sets.

Global Network Alignment in the Context of Aging
F. Faisal, H. Zhao and T. Milenkovic
IEEE/ACM Transactions on Computational Biology and Bioinformatics (IEEE/ACM TCBB 2014)
In Proceedings of the 4th ACM International Conference on Bioinformatics, Computational Biology and Biomedicine (ACM BCB 2013)
[abs] [pdf] [supplement] Analogous to sequence alignment, network alignment (NA) can be used to transfer biological knowledge across species between conserved network regions. NA faces two algorithmic challenges: 1) Which cost function to use to capture “similarities” between nodes in different networks? 2) Which alignment strategy to use to rapidly identify “high-scoring” alignments from all possible alignments? We “break down” existing state-of-the-art methods that use both different cost functions and different alignment strategies to evaluate each combination of their cost functions and alignment strategies. We find that a combination of the cost function of one method and the alignment strategy of another method beats the existing methods. Hence, we propose this combination as a novel superior NA method. Then, since human aging is hard to study experimentally due to the long human lifespan, we use NA to transfer aging-related knowledge from well-annotated model species to the poorly annotated human. By doing so, we produce novel human aging-related knowledge, which complements currently available knowledge about aging that has been obtained mainly by sequence alignment. We demonstrate significant similarity between topological and functional properties of our novel predictions and those of known aging-related genes. We are the first to use NA to learn more about aging.

Continual Learning with Adaptive Weights (CLAW)
T. Adel, H. Zhao and R. E. Turner
In Proceedings of the 8th International Conference on Learning Representations (ICLR 2020)
[abs] [pdf] [poster]

Learning Neural Networks with Adaptive Regularization
H. Zhao*, Y. H. Tsai*, R. Salakhutdinov and G. Gordon
In Proceedings of the 33rd Advances in Neural Information Processing Systems (NeurIPS 2019)
[abs] [pdf] [poster] [slides] [code]

On Strategyproof Conference Peer Review
Y. Xu*, H. Zhao*, X. Shi and N. B. Shah
In Proceedings of the 28th International Joint Conference on Artificial Intelligence (IJCAI 2019)
[abs] [pdf] [supplement] [Full arXiv version]

Efficient Multitask Feature and Relationship Learning
H. Zhao, O. Stretcu, A. Smola and G. Gordon
In Proceedings of the 35th Conference on Uncertainty in Artificial Intelligence (UAI 2019)
Also in the Learning with Limited Labeled Data: Weak Supervision and Beyond workshop at NIPS 2017
[abs] [pdf] [supplement] [poster]

On Learning Invariant Representations for Domain Adaptation
H. Zhao, R. Tachet, K. Zhang and G. Gordon
In Proceedings of the 36th International Conference on Machine Learning (ICML 2019, Long Oral)
[abs] [pdf] [supplement] [poster] [slides] [blog]

Active Learning of Strict Partial Orders: A Case Study on Concept Prerequisite Relations
C. Liang, J. Ye, H. Zhao, B. Pursel and C. Lee Giles
In Proceedings of the 12th International Conference on Educational Data Mining (EDM 2019)
[abs] [pdf]

Deep Generative and Discriminative Domain Adaptation
H. Zhao, J. Hu, Z. Zhu, A. Coates and G. Gordon
In Proceedings of the 18th International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2019)
[abs] [pdf] [poster]

Adversarial Multiple Source Domain Adaptation
H. Zhao*, S. Zhang*, G. Wu, J. Costeira, J. Moura and G. Gordon
In Proceedings of the 32nd Advances in Neural Information Processing Systems (NeurIPS 2018)
[abs] [pdf] [supplement] [poster] [code]

Frank-Wolfe Optimization for Symmetric-NMF under Simplicial Constraint
H. Zhao and G. Gordon
In Proceedings of the 34th Conference on Uncertainty in Artificial Intelligence (UAI 2018)
[abs] [pdf] [supplement]

Convolutional-Recurrent Neural Networks for Speech Enhancement
H. Zhao, S. Zarar, I. Tashev and C. H. Lee
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018, Oral)
[abs] [pdf] [slides]

Linear Time Computation of Moments in Sum-Product Networks
H. Zhao and G. Gordon
In Proceedings of the 31st Advances in Neural Information Processing Systems (NIPS 2017)
[abs] [pdf] [poster]

Unsupervised Domain Adaptation with a Relaxed Covariate Shift Assumption
T. Adel, H. Zhao and A. Wong
In Proceedings of the 31st AAAI Conference on Artificial Intelligence (AAAI 2017)
[abs] [pdf]

A Unified Approach for Learning the Parameters of Sum-Product Networks
H. Zhao, P. Poupart and G. Gordon
In Proceedings of the 30th Advances in Neural Information Processing Systems (NIPS 2016)
[abs] [pdf] [supplement] [poster] [code]

Online Algorithms for Sum-Product Networks with Continuous Variables
P. Jaini, A. Rashwan, H. Zhao, Y. Liu, E. Banijamali, Z. Chen and P. Poupart
In Proceedings of the 8th International Conference on Probabilistic Graphical Models (PGM 2016)
[abs] [pdf]

Collapsed Variational Inference for Sum-Product Networks
H. Zhao, T. Adel, G. Gordon and B. Amos
In Proceedings of the 33rd International Conference on Machine Learning (ICML 2016)
[abs] [pdf] [poster] [slides] [code]

Online and Distributed Bayesian Moment Matching for Parameter Learning in Sum-Product Networks
A. Rashwan, H. Zhao and P. Poupart
In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (AISTATS 2016)
[abs] [pdf]

On the Relationship between Sum-Product Networks and Bayesian Networks
H. Zhao, M. Melibari and P. Poupart
In Proceedings of the 32nd International Conference on Machine Learning (ICML 2015)
[abs] [pdf] [supplement] [Full arXiv version] [slides] [poster]

SoF: Soft-Cluster Matrix Factorization for Probabilistic Clustering
H. Zhao, P. Poupart, Y. Zhang and M. Lysy
In Proceedings of the 29th AAAI Conference on Artificial Intelligence (AAAI 2015)
[abs] [pdf] [poster]

Adversarial Privacy Preservation under Attribute Inference Attack
H. Zhao*, J. Chi*, Y. Tian and G. Gordon
NeurIPS 2019 Workshop on Machine Learning with Guarantees (NeurIPS 2019)
[abs] [pdf] With the prevalence of machine learning services, crowdsourced data containing sensitive information poses substantial privacy challenges. Existing work focusing on protecting against membership inference attacks under the rigorous framework of differential privacy is still vulnerable to attribute inference attacks. In light of this gap between theory and practice, we develop a novel theoretical framework for privacy preservation under attribute inference attacks. Under our framework, we propose a minimax optimization formulation to protect the given attribute and analyze its privacy guarantees against arbitrary adversaries. On the other hand, the privacy constraint may cripple utility when the protected attribute is correlated with the target variable. To this end, we also prove an information-theoretic lower bound to precisely characterize the fundamental trade-off between utility and privacy. Empirically, we conduct extensive experiments to corroborate our privacy guarantee and validate the inherent trade-offs of different privacy-preservation algorithms. Our experimental results indicate that adversarial representation learning approaches achieve the best trade-off between privacy preservation and utility maximization.

Approximate Empirical Bayes for Deep Neural Networks
H. Zhao*, Y. H. Tsai*, R. Salakhutdinov and G. Gordon
In the Uncertainty in Deep Learning workshop at UAI (UAI UDL 2018)
[abs] [pdf] [poster] We propose an approximate empirical Bayes framework and an efficient algorithm for learning the weight matrices of deep neural networks. Empirically, we show that the proposed method works as a regularization approach that improves generalization when training neural networks on small datasets.

Multiple Source Domain Adaptation with Adversarial Learning
H. Zhao*, S. Zhang*, G. Wu, J. Costeira, J. Moura and G. Gordon
In the 6th International Conference on Learning Representations (ICLR 2018, workshop track)
[abs] [pdf] [poster] While domain adaptation has been actively researched in recent years, most theoretical results and algorithms focus on the single-source, single-target adaptation setting. Naive application of such algorithms to the multiple-source domain adaptation problem may lead to suboptimal solutions. We propose a new generalization bound for domain adaptation when there are multiple source domains with labeled instances and one target domain with unlabeled instances. Compared with existing bounds, the new bound requires neither expert knowledge about the target distribution nor an optimal combination rule for the multiple source domains. Interestingly, our theory also leads to an efficient learning strategy using adversarial neural networks: we show how to interpret it as learning feature representations that are invariant to the multiple domain shifts while still being discriminative for the learning task. To this end, we propose two models, both of which we call multisource domain adversarial networks (MDANs): the first model directly optimizes our bound, while the second is a smoothed approximation of the first, leading to a more data-efficient and task-adaptive model. The optimization tasks of both models are minimax saddle-point problems that can be solved by adversarial training. To demonstrate the effectiveness of MDANs, we conduct extensive experiments showing superior adaptation performance on three real-world datasets: sentiment analysis, digit classification, and vehicle counting.

Discovering Order in Unordered Datasets: Generative Markov Networks
Y. H. Tsai, H. Zhao, R. Salakhutdinov and N. Jojic
In the Time Series workshop at NIPS (NIPS TSW 2017)
[abs] [pdf] [slides] [poster] The assumption that data samples are independently and identically distributed is the backbone of many learning algorithms. Nevertheless, datasets often exhibit rich structure in practice, and we argue that there exist unknown orders within the data instances. Aiming to find such orders, we introduce a novel Generative Markov Network (GMN), which we use to extract the order of data instances automatically. Specifically, we assume that the instances are sampled from a Markov chain. Our goal is to learn the transition operator of the chain as well as the generation order by maximizing the generation probability under all possible data permutations. One of our key ideas is to use a neural network as a soft lookup table that approximates the possibly huge, but discrete, transition matrix. This strategy allows us to amortize the space complexity with a single model and makes the transition operator generalizable to unseen instances. To ensure the learned Markov chain is ergodic, we propose a greedy batch-wise permutation scheme that allows fast training. Empirically, we evaluate the learned Markov chain by showing that GMNs are able to discover orders among data instances and also perform comparably well to state-of-the-art methods on the one-shot recognition benchmark task.

A Sober Look at Spectral Learning
H. Zhao and P. Poupart
In the Method of Moments and Spectral Learning workshop at ICML (ICML MM 2014)
[abs] [pdf] [slides] [poster] [code] Spectral learning has recently generated much excitement in machine learning, largely because it is the first known method to produce consistent estimates (under suitable conditions) for several latent variable models. In contrast, maximum-likelihood estimates may get trapped in local optima due to the non-convex nature of the likelihood function of latent variable models. In this paper, we present an empirical evaluation of spectral learning (SL) and expectation maximization (EM), which reveals an important gap between theory and practice. First, SL often leads to negative probabilities. Second, EM often yields better estimates than SL, and it does not seem to get stuck in local optima. We discuss how the rank of the model parameters and the amount of training data can lead to negative probabilities. We also question the common belief that maximum-likelihood estimators are necessarily inconsistent.

I enjoy sketching and calligraphy in my spare time. If I have a long vacation, I also enjoy traveling.