Abstractive Text Summarization using Sequence-to-sequence RNNs and Beyond

Ramesh Nallapati1, Bowen Zhou2, Cicero dos Santos3, Caglar Gulcehre4, Bing Xiang5
1IBM T. J. Watson Research Center, 2IBM Research, 3IBM Watson, 4Université de Montréal, 5IBM


Abstract

In this work, we model abstractive text summarization using Attentional Encoder-Decoder Recurrent Neural Networks, and show that they achieve state-of-the-art performance on two different corpora. We propose several novel models that address critical problems in summarization that are not adequately modeled by the basic architecture, such as modeling key-words, capturing the hierarchy of sentence-to-word structure, and emitting words that are rare or unseen at training time. Our work shows that many of our proposed models contribute to further improvement in performance. We also propose a new dataset consisting of multi-sentence summaries, and establish performance benchmarks for further research.
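The core of the attentional encoder-decoder named above is a per-step attention distribution over source positions, from which a context vector is formed. The sketch below illustrates that step only; the bilinear scoring form, the function names, and the random inputs are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_context(decoder_state, encoder_states, W):
    """One attention step (illustrative bilinear scoring, not the paper's exact form).

    decoder_state:  (d,) current decoder hidden state
    encoder_states: (T, d) hidden states for the T source positions
    W:              (d, d) learned scoring matrix (random here for illustration)
    """
    # Score each source position: s_i = h_dec^T W h_enc_i
    scores = encoder_states @ W.T @ decoder_state        # (T,)
    weights = softmax(scores)                            # attention distribution, sums to 1
    context = weights @ encoder_states                   # (d,) weighted sum of encoder states
    return context, weights

# Toy usage with random vectors standing in for RNN states.
rng = np.random.default_rng(0)
d, T = 4, 6
h_dec = rng.standard_normal(d)
H_enc = rng.standard_normal((T, d))
W = rng.standard_normal((d, d))
context, weights = attention_context(h_dec, H_enc, W)
```

The context vector is then concatenated with the decoder state to predict the next summary word; the attention weights themselves indicate which source words the model is "reading" at each step.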