Evaluation of Specification Marks Method

We can also combine a set of knowledge-based heuristics using different strategies. For instance, all the heuristics described in the previous section can be applied in order, passing on to each heuristic only the ambiguity that the previous heuristics were unable to resolve.
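The cascaded strategy can be sketched as follows. This is a minimal illustration, not the paper's implementation; the heuristic functions and sense identifiers are hypothetical.

```python
def cascade_disambiguate(candidate_senses, heuristics):
    """Apply heuristics in order; each heuristic sees only the ambiguity
    the previous ones could not resolve.

    candidate_senses: list of sense identifiers for the target noun.
    heuristics: functions mapping a sense list to a (possibly smaller)
    sense list; a heuristic that cannot decide returns its input unchanged.
    """
    remaining = list(candidate_senses)
    for heuristic in heuristics:
        if len(remaining) <= 1:
            break  # fully disambiguated; later heuristics are skipped
        remaining = heuristic(remaining)
    return remaining

# Toy usage with two stand-in heuristics.
drop_last = lambda senses: senses[:-1]
keep_first = lambda senses: senses[:1]
print(cascade_disambiguate(["bank#1", "bank#2", "bank#3"],
                           [drop_last, keep_first]))  # ['bank#1']
```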

In order to evaluate the performance of the knowledge-based heuristics previously defined, we used the SemCor collection (Miller et al., 1993), in which all content words are annotated with the most appropriate WordNet sense. In this case, we used a window of fifteen nouns (the target noun plus seven context nouns before and seven after it).

The results obtained for the specification marks method using the heuristics when applied one by one are shown in Table 2. This table shows the results for polysemous nouns only, and for polysemous and monosemous nouns combined.


Table 2: Results Using Heuristics Applied in Order on SemCor
Nouns Precision Recall Coverage
Polysemous + monosemous nouns 0.553 0.522 0.943
Polysemous nouns only 0.377 0.311 0.943
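The three figures reported in Tables 2-4 follow the standard WSD evaluation measures, which can be computed from three counts as sketched below (the counts shown in the usage line are illustrative, not the SemCor results).

```python
def wsd_scores(correct, attempted, total):
    """Standard WSD evaluation figures:
    precision = correct / attempted  (quality of the answers given)
    recall    = correct / total      (quality over all test instances)
    coverage  = attempted / total    (fraction of instances answered)
    """
    return {
        "precision": correct / attempted,
        "recall": correct / total,
        "coverage": attempted / total,
    }

# Illustrative counts only.
print(wsd_scores(correct=50, attempted=80, total=100))
```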


The results obtained for the heuristics applied independently are shown in Table 3. As the table shows, the heuristics behave differently, yielding different precision/recall trade-offs.


Table 3: Results Using Heuristics Applied Independently
Heuristics Precision Recall Coverage
Mono+Poly Polysemic Mono+Poly Polysemic Mono+Poly Polysemic
Spec. Mark Method 0.383 0.300 0.341 0.292 0.975 0.948
Hypernym 0.563 0.420 0.447 0.313 0.795 0.745
Definition 0.480 0.300 0.363 0.209 0.758 0.699
Hyponym 0.556 0.393 0.436 0.285 0.784 0.726
Gloss hypernym 0.555 0.412 0.450 0.316 0.811 0.764
Gloss hyponym 0.617 0.481 0.494 0.358 0.798 0.745
Common specification 0.565 0.423 0.443 0.310 0.784 0.732
Domain WSD 0.585 0.453 0.483 0.330 0.894 0.832


Another possibility is to combine all the heuristics using a majority voting scheme (Rigau et al., 1997). In this simple scheme, each heuristic casts a vote, and the method selects the synset that obtains the most votes. The results shown in Table 4 illustrate that when the heuristics vote independently, the method achieves 39.1% recall for polysemous nouns (with full coverage), an improvement of 8 percentage points over applying the heuristics in order (one by one).
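The voting scheme can be sketched as follows. The tie-breaking behaviour (here, arbitrary via insertion order) is an assumption, since the text does not specify how ties are handled.

```python
from collections import Counter

def majority_vote(votes):
    """Select the synset that obtains the most votes.

    votes: the synsets proposed by the individual heuristics; a heuristic
    that abstains contributes None, which is ignored.
    """
    counts = Counter(v for v in votes if v is not None)
    if not counts:
        return None  # no heuristic produced an answer
    return counts.most_common(1)[0][0]

# Three heuristics propose plant#2, one abstains, one proposes plant#1.
print(majority_vote(["plant#2", "plant#1", "plant#2", None, "plant#2"]))
# plant#2
```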


Table 4: Results Using Majority Voting on SemCor
Precision Recall
Mono+Poly Polysemic Mono+Poly Polysemic
Voting heuristics 0.567 0.436 0.546 0.391


We also show in Table 5 the results of our domain heuristic when applied to the English all-words task of SENSEVAL-2. As the table shows, the polysemy reduction caused by domain clustering can profitably help WSD. Since domains are coarser than synsets, word domain disambiguation (WDD) (Magnini and Cavaglià, 2000) can obtain better results than WSD. Our goal is to perform a preliminary domain disambiguation in order to provide an informed reduction of the search space.
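One simple way to realize this search-space reduction is to keep only the senses belonging to the dominant domain among the candidates, leaving fewer senses for the subsequent sense-level step. The sense-to-domain mapping below is a hypothetical stand-in for WordNet Domains labels, not the actual resource.

```python
from collections import Counter

# Hypothetical sense-to-domain labels (illustrative only).
SENSE_DOMAIN = {
    "bank#1": "economy",
    "bank#2": "geography",
    "bank#3": "economy",
}

def restrict_by_domain(senses):
    """Keep only the senses whose domain is the majority domain among
    the candidates; sense-level WSD then runs on this reduced set."""
    domains = Counter(SENSE_DOMAIN[s] for s in senses)
    best_domain = domains.most_common(1)[0][0]
    return [s for s in senses if SENSE_DOMAIN[s] == best_domain]

print(restrict_by_domain(["bank#1", "bank#2", "bank#3"]))
# ['bank#1', 'bank#3']
```

Because several synsets collapse into one domain, deciding among domains is an easier task than deciding among synsets, which is consistent with the higher figures for the domain level in Table 5.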


Table 5: Results of the Domain WSD Heuristic on SENSEVAL-2
Disambiguation level Precision Recall
Sense 0.44 0.32
Domain 0.54 0.43