Hao Zhang

Ph.D. Student
The Robotics Institute
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213, USA

Email: hao AT


I am currently a Ph.D. student at the Robotics Institute, Carnegie Mellon University, advised by Prof. Eric Xing. I received my M.S. degree from the Department of Computer Science and Engineering at Shanghai Jiao Tong University in 2014. Before that, I completed my Bachelor's degree in Computer Science at South China University of Technology from 2008 to 2011.

My research interests are in scalable and structured machine learning, deep learning, and their applications in computer vision and natural language processing. I (co-)design models, algorithms, and systems that enable machine learning to be applied and deployed on larger-scale problems and applications. I also briefly worked on machine learning for medical brain-computer interfaces.


Cavs: A Vertex-centric Programming Interface for Dynamic Neural Networks
Hao Zhang*, Shizhen Xu*, Graham Neubig, Qirong Ho, Guangwen Yang, and Eric P. Xing (* indicates equal contribution)
AISys@SOSP'17, MLSys@NIPS'17
Generative Semantic Manipulation with Contrasting GAN
arXiv preprint, 2017
Structured Generative Adversarial Networks
Hao Zhang*, Zhijie Deng*, Xiaodan Liang, Luona Yang, Shizhen Xu, Jun Zhu, and Eric P. Xing (* indicates equal contribution)
NIPS 2017 (NVIDIA Pioneer Research Award!)
Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters
ATC 2017 (Oral)
Recurrent Topic-Transition GAN for Visual Paragraph Generation
Xiaodan Liang, Zhiting Hu, Hao Zhang, Chuang Gan, and Eric P. Xing
ICCV 2017
SCAN: Structure Correcting Adversarial Network for Chest X-rays Organ Segmentation
Wei Dai, Joseph Doyle, Xiaodan Liang, Hao Zhang, Nanqing Dong, Yuan Li, and Eric P. Xing
arXiv preprint, 2017
ZM-Net: Real-time Zero-shot Image Manipulation Network
arXiv preprint, 2017
Poseidon: A System Architecture for Efficient GPU-based Deep Learning on Multiple Machines
ATC 2016 (Poster), MLSys Workshop@ICML 2016 (Spotlight)
Learning Concept Taxonomies from Multi-modal Data
Hao Zhang, Zhiting Hu, Yuntian Deng, Mrinmaya Sachan, Zhicheng Yan, and Eric P. Xing
ACL 2016 (Oral)
GeePS: Scalable Deep Learning on Distributed GPUs with a GPU-specialized Parameter Server
EuroSys 2016
Combining the Best of Convolutional Layers and Recurrent Layers: A Hybrid Network for Semantic Segmentation
Zhicheng Yan, Hao Zhang, Yangqing Jia, Thomas Breuel, Yizhou Yu
arXiv preprint, 2016
Automatic Photo Adjustment Using Deep Learning
ACM TOG Vol. 35, No. 2, ICCP 2016 (Invited Poster)
On the Reducibility of Submodular Functions
Jincheng Mei, Hao Zhang, and Baoliang Lu
HD-CNN: Hierarchical Deep Convolutional Neural Network for Large Scale Visual Recognition
Zhicheng Yan, Hao Zhang, Robinson Piramuthu, Vignesh Jagadeesh, Dennis DeCoste, Wei Di, and Yizhou Yu
ICCV 2015
Dynamic Topic Modeling for Monitoring Market Competition from Online Text and Image Data
Hao Zhang, Gunhee Kim, and Eric P. Xing
KDD 2015 (Oral)
A Boosting-based Spatial-Spectral Model for Stroke Patients' EEG Analysis in Rehabilitation Training
Ye Liu*, Hao Zhang*, and Liqing Zhang (* indicates equal contribution)
ECAI 2014, IEEE TNSRE 2015
Gaussian Mixture Modeling in Stroke Patients' Rehabilitation EEG Data Analysis
Hao Zhang, Ye Liu, Jianyi Liang, Jianting Cao, and Liqing Zhang
EMBC 2013
A Tensor-Based Scheme for Stroke Patients' Motor Imagery EEG Analysis in BCI-FES Rehabilitation Training
Ye Liu, Mingfen Li, Hao Zhang, Junhua Li, Jie Jia, Yi Wu, Jianting Cao, and Liqing Zhang
EMBC 2013, JNM 2013


I build or contribute to many projects for large-scale machine learning, some of which are open source.
  • We have significantly improved Poseidon to v2.0, a general-purpose communication architecture for distributed deep learning. Poseidon v2.0 supports both TensorFlow and Caffe programs. It delivers linear scalability with additional GPU nodes on up to 32 nodes, even under limited Ethernet bandwidth.
  • DyNet is a neural network library developed by Carnegie Mellon University and many others. It is written in C++ (with bindings in Python) and is designed to be efficient when run on either CPU or GPU, and to work well with networks whose structure changes for every training instance (a minimal usage sketch follows this list).
  • GeePS is a new parameter server for data-parallel deep learning on GPUs. GeePS addresses the problem of limited GPU memory: GeePS's explicit GPU memory management support enables GPU-based training of neural networks that are much larger than the GPU memory.
  • Poseidon is a system architecture for distributed deep learning on GPU clusters. Poseidon v1.0 includes a Bosen-based implementation that provides distributed acceleration for Caffe.
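
DyNet's defining trait is that it declares a fresh computation graph for every training instance, which is what makes variably-structured networks cheap to express. Below is a minimal, illustrative sketch of that style in DyNet's Python bindings, training a toy logistic regressor; the toy data, dimensions, and SGD setup are placeholders of mine, not from any real project, and the calls follow the DyNet 2.x API.

    # Minimal sketch of DyNet's per-instance dynamic graphs (DyNet 2.x Python API).
    # The toy data and dimensions below are placeholders, not from any real task.
    import dynet as dy

    model = dy.ParameterCollection()          # holds all trainable parameters
    trainer = dy.SimpleSGDTrainer(model)
    pW = model.add_parameters((1, 2))         # 1x2 weight matrix
    pb = model.add_parameters(1)              # bias

    data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]   # toy (input, target) pairs

    for x, y in data:
        dy.renew_cg()                         # fresh computation graph per instance
        W, b = dy.parameter(pW), dy.parameter(pb)
        y_pred = dy.logistic(W * dy.inputVector(x) + b)
        loss = dy.binary_log_loss(y_pred, dy.scalarInput(y))
        loss.value()                          # forward pass
        loss.backward()                       # backward pass
        trainer.update()                      # SGD step

Because the graph is rebuilt inside the loop, the same pattern extends directly to inputs whose structure changes per example, such as trees or variable-length sequences.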

Work Experience

  • Research Intern, Microsoft Research Asia, 2013 - 2014
  • Software Engineer Intern, Microsoft Shanghai, 2011 - 2012