Literary novels push the limits of natural language processing. While much work in NLP has been optimized for the narrow domains of news and Wikipedia, literary novels are an entirely different animal: their long, complex sentences strain the limits of syntactic parsers with super-linear computational complexity; their figurative language challenges representations of meaning based on neo-Davidsonian semantics; and their sheer length (ca. 100,000 words on average) rules out existing solutions for problems like coreference resolution, which expect a small set of candidate antecedents.
At the same time, fiction drives computational research questions that are uniquely interesting to that domain. In this talk, I'll outline some of the opportunities that NLP presents for research in the quantitative analysis of culture, including measuring the disparity in attention given to characters as a function of their gender over two hundred years of literary history (Underwood et al. 2018). I'll also describe our progress to date on two problems essential to a more complex representation of plot: recognizing the entities in literary texts, such as the characters, locations, and spaces of interest (Bamman et al. 2019), and identifying the events that are depicted as having transpired (Sims et al. 2019). Both efforts involve creating a new dataset of 200,000 words evenly drawn from 100 different English-language literary texts and building computational models to automatically identify each phenomenon.
This is joint work with Matt Sims, Ted Underwood, Sabrina Lee, Jerry Park, Sejal Popat and Sheng Shen.
About the Speaker