Recently, many cloud-based machine learning (ML) services have been launched, including Microsoft Azure Machine Learning, GraphLab, Google Prediction API, and Ersatz Labs. Cloud ML makes machine learning easy to use for ordinary users, but it also puts the privacy and security of users' data at risk. Protecting users' privacy in cloud ML is therefore a major challenge. In this work, we focus on neural networks, a backbone model in machine learning, and investigate how to perform privacy-preserving neural network prediction on encrypted data. Users encrypt their data before uploading it to the cloud. The cloud performs neural network prediction over the encrypted data and obtains results that are also in encrypted form, which the cloud cannot decipher. The encrypted results are sent back to the users, who decrypt them to recover the plaintext results. Throughout this process, the cloud never learns the users' input data or output results, since both are encrypted. This provides strong protection of users' privacy. Meanwhile, with the help of homomorphic encryption, predictions made on encrypted data are nearly the same as those made on plaintext data, so the predictive performance of the neural network is preserved.
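The abstract does not name the specific encryption scheme used in the work, but the core idea of computing on ciphertexts can be illustrated with any homomorphic cryptosystem. The sketch below is a toy implementation of the (additively homomorphic) Paillier cryptosystem, used here to compute one encrypted linear layer (a dot product of the cloud's plaintext weights with the user's encrypted input) without the cloud ever seeing the input. The fixed tiny primes and the weight values are illustrative assumptions only; real deployments need large random primes and fixed-point encodings of real-valued data.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic, i.e.
# Enc(a) * Enc(b) mod n^2 decrypts to a + b, and Enc(a)^k decrypts to k*a.
# NOT secure: tiny hard-coded primes, for illustration of the idea only.

def keygen():
    p, q = 104723, 104729          # small fixed primes (insecure, demo only)
    n = p * q
    lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
    return (n,), (n, lam)          # public key, secret key

def encrypt(pk, m):
    (n,) = pk
    n2 = n * n
    while True:                    # random r coprime to n for semantic security
        r = random.randrange(2, n)
        if gcd(r, n) == 1:
            break
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2   # g = n+1

def decrypt(sk, c):
    n, lam = sk
    n2 = n * n
    mu = pow(lam, -1, n)           # modular inverse of lambda mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n      # L(c^lam mod n^2) * mu

def he_add(pk, c1, c2):
    """Ciphertext of the sum of the two underlying plaintexts."""
    (n,) = pk
    return (c1 * c2) % (n * n)

def he_scale(pk, c, k):
    """Ciphertext of k times the underlying plaintext (k is public)."""
    (n,) = pk
    return pow(c, k, n * n)

# Protocol sketch: the cloud evaluates w . x on encrypted x.
pk, sk = keygen()
x = [3, 1, 4]                      # user's private input (stays encrypted)
w = [2, 5, 7]                      # cloud's plaintext model weights (assumed)
enc_x = [encrypt(pk, xi) for xi in x]   # user uploads only ciphertexts
enc_y = encrypt(pk, 0)
for wi, ci in zip(w, enc_x):       # cloud: accumulate sum of wi * xi homomorphically
    enc_y = he_add(pk, enc_y, he_scale(pk, ci, wi))
print(decrypt(sk, enc_y))          # user decrypts: 2*3 + 5*1 + 7*4 = 39
```

Note that an additive scheme like this only covers the linear parts of a network; handling nonlinear activations on ciphertexts is exactly where fully homomorphic encryption, or approximations of the activations, would come in.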
Pengtao Xie is a graduate student in the Language Technologies Institute, School of Computer Science, working with Professor Eric Xing. His research interests lie in the interaction between machine learning and systems. Specifically, he designs distributed systems to support large-scale machine learning and studies how to run machine learning on encrypted data to protect users' privacy. He received his M.E. from Tsinghua University in 2013 and his B.E. from Sichuan University in 2010.