Hongyang Zhang 张弘扬
I am a second-year Ph.D. student at Carnegie Mellon University, working with Prof. Maria-Florina Balcan.
Before joining CMU, I graduated from Peking University in 2015. I was fortunate to work with Prof. Zhouchen Lin and Prof. Chao Zhang.
My research interests broadly include machine learning theory and algorithms, such as convex/non-convex optimization, low-rank subspace learning, noise-tolerant active learning, and life-long learning.
I have been organizing the Learning Theory Reading Group at CMU since 2017. [subscribe] [volunteer to talk]
2016/10/7. I received a NIPS travel award. Thanks, NIPS!
2016/8/12. One paper was accepted to NIPS 2016.
2016/5/3. One paper was accepted to IEEE Transactions on Information Theory.
2016/4/26. One paper was accepted to COLT 2016.
Machine Learning Theory, Information Theory, Statistics, and Optimization.
With Maria-Florina Balcan, Yingyu Liang, David P. Woodruff (α-β order). "Optimal Sample Complexity for Matrix Completion and Related Problems via L2-Regularization", 2017.
With Maria-Florina Balcan (α-β order). "S-Concave Distributions: Towards Broader Distributions for Noise-Tolerant and Sample-Efficient Learning Algorithms", 2017. [arXiv:1703.07758]
With Maria-Florina Balcan, Travis Dick, Yingyu Liang, Wenlong Mou (α-β order). "Differentially Private Clustering in High-Dimensional Euclidean Spaces", 2017.
Yichong Xu, Hongyang Zhang, Kyle Miller, Aarti Singh, Artur Dubrawski. "Noise-Tolerant Interactive Learning from Pairwise Comparisons with Near-Minimal Label Complexity", 2017. [arXiv:1704.05820]
With Zhouchen Lin. "Low Rank Models for Visual Analysis: Theories, Algorithms and Applications", Elsevier, in press.
Hongyang Zhang, Zhouchen Lin, Chao Zhang. "Completing Low-Rank Matrices with Corrupted Samples from Few Coefficients in General Basis", IEEE Transactions on Information Theory 62, pp. 4748-4768, 2016. [pdf]
Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao. "Relations Among Some Low Rank Subspace Recovery Models", Neural Computation 27, pp. 1915-1950, 2015. [pdf]
Hongyang Zhang, Zhouchen Lin, Chao Zhang, Junbin Gao. "Robust Latent Low Rank Representation for Subspace Clustering", Neurocomputing 145, pp. 369-373, 2014. [pdf]
Hongyang Zhang, Shan You, Zhouchen Lin, Chao Xu. "Fast Compressive Phase Retrieval under Bounded Noise", AAAI 2017, San Francisco, USA. [pdf] [supp]
With Maria-Florina Balcan (α-β order). "Noise-Tolerant Life-Long Matrix Completion via Adaptive Sampling", NIPS 2016, Barcelona, Spain. [pdf] [supp] [spotlight]
With Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab (α-β order). "Learning and 1-bit Compressed Sensing under Asymmetric Noise", COLT 2016, New York, USA. [pdf]
Hongyang Zhang, Zhouchen Lin, Chao Zhang, Edward Chang. "Exact Recoverability of Robust PCA via Outlier Pursuit with Tight Recovery Bounds", AAAI 2015, Austin, USA. [pdf] [supp]
Xin Shi, Chao Zhang, Fangyun Wei, Hongyang Zhang, Yiyuan She. "Manifold-Regularized Selectable Factor Extraction for Semi-Supervised Image Classification", BMVC 2015, Swansea, UK. [pdf]
Hongyang Zhang, Zhouchen Lin, Chao Zhang. "A Counterexample for the Validity of Using Nuclear Norm as A Convex Surrogate of Rank", ECML/PKDD 2013, Prague, Czech Republic. [pdf]
With Pranjal Awasthi, Maria-Florina Balcan, Nika Haghtalab (α-β order). "Learning and 1-bit Compressed Sensing under Asymmetric Noise", ICML 2016 Workshop on Advances in Non-Convex Analysis and Optimization, New York, USA. [pdf]
Simon S. Du, Yichong Xu, Yuan Li, Hongyang Zhang, Aarti Singh, Pulkit Grover. "Novel Quantization Strategies for Linear Prediction with Guarantees", ICML 2016 Workshop on On-Device Intelligence, New York, USA. [pdf]
Journal Refereeing: Journal of Machine Learning Research, IEEE Transactions on Pattern Analysis and Machine Intelligence, IEEE Transactions on Signal Processing, IEEE Transactions on Cybernetics, IEEE Signal Processing Letters, IEEE Access, Neurocomputing.
Conference Refereeing: AAAI 2016, ICML 2016, NIPS 2016, IJCAI 2017 (PC member), STOC 2017, NIPS 2017 (PC member).
Volunteer: ICML 2016.