Newsgroups: comp.ai.nat-lang
Path: cantaloupe.srv.cs.cmu.edu!das-news2.harvard.edu!news2.near.net!howland.reston.ans.net!math.ohio-state.edu!scipio.cyberstore.ca!vanbc.wimsey.com!news.bc.net!newsserver.sfu.ca!fornax!jamie
From: jamie@cs.sfu.ca (Jamie Andrews)
Subject: Re: best parser???
Message-ID: <1994Nov30.173823.29205@cs.sfu.ca>
Organization: Faculty of Applied Science, Simon Fraser University
References: <MAGERMAN.94Nov15175620@platypus.bbn.com> <QOBI.94Nov23151624@qobi.ai> <1994Nov25.183438.23764@cs.sfu.ca> <TED.94Nov29161620@ilios.crl.nmsu.edu>
Date: Wed, 30 Nov 1994 17:38:23 GMT
Lines: 21

In article <TED.94Nov29161620@ilios.crl.nmsu.edu>,
Ted Dunning <ted@crl.nmsu.edu> wrote:
>	Also, when we evaluate entire systems only along the lines
>   Jeff suggests, -- ...  -- we are doomed to always having to build
>   an entire system, put it into operation on a real task, and do
>   laborious human measurements in order to evaluate it.
>
>but this is a real point.  end to end evaluation is a problem.  but
>jeff's point that getting better accuracy in creating a particular
>internal representation does not necessarily imply better system
>performance is dead on.

     You missed the next sentence in my posting, in which I
agreed that end-to-end evaluation is important in some contexts.
I disagree with your interpretation of Jeff's post; it was
rather a categorical denial that evaluation of a parser in
isolation can be useful -- no "not necessarily" about it.

--Jamie.
  jamie@cs.sfu.ca
"Make sure Reality is not twisted after insertion"
