Tuesday, May 8, 2018, 12:00 PM. GCH 6115.


Adams Wei Yu -- Efficient and Effective Models for Machine Reading Comprehension

Abstract: Machine reading comprehension has attracted a great deal of attention in the machine learning and natural language processing communities. In this talk, I will introduce two efficient and effective models for this task.

First, I will present LSTM-Jump, a model that learns to skip unimportant information in sequential data, mimicking the skimming behavior of human reading. Trained with an efficient reinforcement learning algorithm, it runs several times faster than a vanilla LSTM at inference time.
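To make the skimming idea concrete, here is a minimal PyTorch sketch of a jump-based reading loop at inference time. The window size, jump range, and all class and parameter names are illustrative assumptions, not the paper's exact implementation:

    import torch
    import torch.nn as nn

    class JumpReader(nn.Module):
        # Illustrative sketch: read a small window with an LSTM, then
        # predict how many tokens to skip before the next read.
        def __init__(self, vocab_size, embed_dim=128, hidden_dim=256,
                     read_size=4, max_jump=5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.cell = nn.LSTMCell(embed_dim, hidden_dim)
            # Predicts a jump size in {0, ..., max_jump}; 0 means "stop".
            self.jump = nn.Linear(hidden_dim, max_jump + 1)
            self.read_size = read_size

        def forward(self, tokens):  # tokens: (seq_len,) LongTensor
            h = torch.zeros(1, self.cell.hidden_size)
            c = torch.zeros(1, self.cell.hidden_size)
            pos = 0
            while pos < tokens.size(0):
                # Read a small window of tokens with the LSTM.
                for t in tokens[pos:pos + self.read_size]:
                    h, c = self.cell(self.embed(t).unsqueeze(0), (h, c))
                pos += self.read_size
                # Greedy jump decision (sampled during RL training).
                k = self.jump(h).argmax(dim=-1).item()
                if k == 0:   # a jump of 0 terminates reading early
                    break
                pos += k     # skip k tokens without reading them
            return h         # final state feeds a task-specific classifier

Because whole stretches of the input are never fed through the recurrence, inference cost drops roughly in proportion to the fraction of tokens skipped.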

Then I will introduce a sequence encoding method that discards recurrent networks entirely and thus fully supports parallel training and inference. Building on this technique, I will present QANet, a new question-answering model. Combined with a data augmentation approach based on back-translation, QANet achieved the top position on the competitive Stanford Question Answering Dataset (SQuAD) leaderboard while running several times faster than the prevalent recurrent models. Notably, QANet's exact match score exceeds human performance.
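To illustrate the recurrence-free encoding, below is a minimal PyTorch sketch of a QANet-style block combining depthwise convolution (for local structure) with self-attention (for global interactions). The dimensions, layer counts, and names are illustrative assumptions rather than the published configuration:

    import torch
    import torch.nn as nn

    class ConvSelfAttnBlock(nn.Module):
        # Illustrative sketch of one encoder block: convolution,
        # self-attention, and feed-forward sublayers, each wrapped in
        # layer normalization and a residual connection. No recurrence.
        def __init__(self, dim=128, heads=8, kernel_size=7):
            super().__init__()
            self.norm1 = nn.LayerNorm(dim)
            # Depthwise convolution captures local context in parallel.
            self.conv = nn.Conv1d(dim, dim, kernel_size,
                                  padding=kernel_size // 2, groups=dim)
            self.norm2 = nn.LayerNorm(dim)
            # Self-attention captures global interactions, also in parallel.
            self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
            self.norm3 = nn.LayerNorm(dim)
            self.ff = nn.Sequential(nn.Linear(dim, dim * 4), nn.ReLU(),
                                    nn.Linear(dim * 4, dim))

        def forward(self, x):  # x: (batch, seq_len, dim)
            y = self.norm1(x)
            x = x + self.conv(y.transpose(1, 2)).transpose(1, 2)
            y = self.norm2(x)
            x = x + self.attn(y, y, y, need_weights=False)[0]
            return x + self.ff(self.norm3(x))

    # All positions are processed simultaneously, so both training and
    # inference parallelize over the sequence, unlike an RNN's
    # step-by-step recurrence.
    x = torch.randn(2, 50, 128)
    print(ConvSelfAttnBlock()(x).shape)  # torch.Size([2, 50, 128])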

This talk is based on the following two papers: (1) Learning to Skim Text: http://aclweb.org/anthology/P17-1172; (2) QANet: Combining Local Convolution with Global Self-Attention for Reading Comprehension: https://arxiv.org/pdf/1804.09541.pdf