Wilson et al., EMNLP 2005

Recognizing contextual polarity in phrase-level sentiment analysis

ACM Portal or Wilson's Copy

This paper explains a method for determining the contextual polarity of phrases. Moving from the document level to the sentence or phrase level is necessary for some applications (QA, etc.) and also enables more careful analysis (such as negation and changes in polarity). The authors provide examples supporting the claim that contextual polarity is preferable to the prior polarity that might be defined in a lexicon. Later in the paper, a simple prior-polarity classifier, which assumes the contextual polarity is the same as the prior polarity, achieves only 48% accuracy, and 76% of its errors come from words assumed to be neutral that are actually polar in certain contexts.
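For illustration, a minimal Python sketch of that prior-polarity baseline: each clue instance simply receives the polarity listed in the lexicon, ignoring context. The lexicon entries and the example sentence below are made-up assumptions, not taken from the paper's lexicon.

# Prior-polarity baseline: tag every token with its lexicon polarity,
# ignoring context. PRIOR_LEXICON is a tiny hypothetical stand-in.
PRIOR_LEXICON = {
    "brilliant": "positive",
    "trust": "positive",
    "hate": "negative",
    "condemn": "negative",
}

def prior_polarity_baseline(tokens):
    # Label each token with its prior polarity, or 'neutral' if it is not a clue.
    return [(tok, PRIOR_LEXICON.get(tok.lower(), "neutral")) for tok in tokens]

print(prior_polarity_baseline("They did not condemn the report".split()))
# 'condemn' is still tagged negative even though negation shifts the
# contextual polarity of the phrase -- the kind of error the paper targets.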

For their data, the authors developed an annotation scheme for marking subjective expressions and used it to annotate an opinion corpus, MPQA. They obtained 82% inter-annotator agreement (kappa 0.72), or better when the cases annotators marked as uncertain are eliminated. They also used a lexicon of subjective words (which they call subjectivity clues) with their prior polarity specified, and they expanded the lexicon using a dictionary and thesaurus.
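As a side note on the agreement figure, Cohen's kappa corrects observed agreement for chance agreement; the sketch below only shows the arithmetic on made-up counts, not the actual MPQA annotation study's confusion matrix.

# Cohen's kappa = (p_observed - p_expected) / (1 - p_expected).
# The confusion matrix passed in below is hypothetical, for illustration only.
def cohens_kappa(confusion):
    total = sum(sum(row) for row in confusion)
    p_obs = sum(confusion[i][i] for i in range(len(confusion))) / total
    rows = [sum(row) / total for row in confusion]
    cols = [sum(confusion[i][j] for i in range(len(confusion))) / total
            for j in range(len(confusion))]
    p_exp = sum(r * c for r, c in zip(rows, cols))
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa([[40, 5], [5, 50]]), 2))  # about 0.8 on these toy counts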

The experiments identify the polarity of expressions (expression boundaries are not detected, but the authors speculate that detecting them could improve performance). The contextual polarity is determined for the subjectivity clues and can be polar (positive, negative, or both) or neutral. Evaluation is against the manual annotation described above. Their process has two steps, sketched in outline just below:
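Assuming the two classifiers already exist (their names here are hypothetical placeholders), the control flow is roughly:

# Two-step pipeline: step one filters out neutral instances; step two
# assigns a contextual polarity to the instances judged polar.
def contextual_polarity(clue_instance, neutral_vs_polar, polarity_classifier):
    if neutral_vs_polar(clue_instance) == "neutral":
        return "neutral"
    return polarity_classifier(clue_instance)  # positive, negative, both, or neutral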

The first step classifies clue instances in context as neutral or polar. The authors extracted 28 features, summarized below, and used BoosTexter (AdaBoost.HM) for classification (details omitted); a feature-extraction sketch follows the list:

  • Word features (context, prior polarity, etc.)
  • Modification features - linguistic (is an intensifier, preceded by an adjective, dependency parse info, etc.)
  • Sentence features (pronoun in sentence, etc.)
  • Document features (document topic)
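A hedged sketch of step one: each clue instance is represented with a handful of the features above and fed to a boosting classifier. The paper uses BoosTexter (AdaBoost.HM); scikit-learn's AdaBoostClassifier is used here only as a stand-in, and the feature definitions and toy training pairs are illustrative assumptions, not the MPQA data.

from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import AdaBoostClassifier

def clue_features(token, prior_polarity, prev_word, sentence):
    # A small subset of the 28 features; names and definitions are assumed.
    return {
        "token": token.lower(),                                   # word feature
        "prior_polarity": prior_polarity,                         # word feature
        "preceded_by_intensifier": prev_word in {"very", "really", "quite"},
        "pronoun_in_sentence": any(w in {"i", "we", "they"}
                                   for w in sentence.lower().split()),
    }

# Toy (feature dict, neutral/polar label) pairs.
train = [
    (clue_features("brilliant", "positive", "really", "We think it is really brilliant"), "polar"),
    (clue_features("condemn", "negative", "will", "They will condemn the decision"), "polar"),
    (clue_features("trust", "positive", "no", "There is no trust between them"), "polar"),
    (clue_features("will", "neutral", "they", "They will attend"), "neutral"),
    (clue_features("report", "neutral", "the", "The report was released"), "neutral"),
]

vec = DictVectorizer()
X = vec.fit_transform([feats for feats, _ in train])
y = [label for _, label in train]
clf = AdaBoostClassifier(n_estimators=50).fit(X, y)

test = clue_features("brilliant", "positive", "very", "I find it very brilliant")
print(clf.predict(vec.transform([test])))  # expected: ['polar']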

Results: F-measures for the polar and neutral classes are 63 and 82, respectively. A classifier using just the word token gets slightly better accuracy but 20% lower recall; with all features, the best precision and recall are obtained.

In the second step, the clue instances marked as polar are classified by their contextual polarity (positive, negative, neutral, or both). This time they used 10 features (a sketch of the negation-related polarity features follows the list):

  • Word features (token, word polarity)
  • Polarity features (negation, modified polarity, etc.)
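A hedged sketch of those negation-related polarity features. The window size, negation word list, and flip rule below are assumptions for illustration; in the paper the final label is still learned by the classifier rather than produced by a hard flip.

# Polarity features for the clue at position idx: is it negated nearby,
# and what does its polarity become after the (assumed) flip?
NEGATIONS = {"not", "no", "never", "n't", "without"}

def polarity_features(tokens, idx, prior_polarity, window=4):
    left_context = [t.lower() for t in tokens[max(0, idx - window):idx]]
    negated = any(t in NEGATIONS for t in left_context)
    flip = {"positive": "negative", "negative": "positive"}
    modified = flip.get(prior_polarity, prior_polarity) if negated else prior_polarity
    return {
        "token": tokens[idx].lower(),
        "prior_polarity": prior_polarity,
        "negated": negated,
        "modified_polarity": modified,
    }

tokens = "They did not condemn the report".split()
print(polarity_features(tokens, tokens.index("condemn"), "negative"))
# {'token': 'condemn', 'prior_polarity': 'negative', 'negated': True,
#  'modified_polarity': 'positive'}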

Results: F-measures for the positive, negative, both, and neutral classes are 65/77/16/46, and the feature evaluation again shows that the combination of all features yields the best performance.

  • BibTeX
@inproceedings{1220619,
author = {Theresa Wilson and Janyce Wiebe and Paul Hoffmann},
title = {Recognizing contextual polarity in phrase-level sentiment analysis},
booktitle = {HLT '05: Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing},
year = {2005},
pages = {347--354},
location = {Vancouver, British Columbia, Canada},
doi = {http://dx.doi.org/10.3115/1220575.1220619},
publisher = {Association for Computational Linguistics},
address = {Morristown, NJ, USA},
}

Annotated by Mehrbod
