Publication:
METEOR: An Automatic Metric for MT Evaluation with High Levels of Correlation with Human Judgments
A. Lavie, Abhaya Agarwal • @Workshop on Statistical Machine Translation • 23 June 2007
TLDR: The technical details underlying the Meteor metric are recapped; the latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French, and German, in addition to English.
Citations: 1104
Abstract: Meteor is an automatic metric for Machine Translation evaluation which has been demonstrated to have high levels of correlation with human judgments of translation quality, significantly outperforming the more commonly used Bleu metric. It is one of several automatic metrics used in this year's shared task within the ACL WMT-07 workshop. This paper recaps the technical details underlying the metric and describes recent improvements in the metric. The latest release includes improved metric parameters and extends the metric to support evaluation of MT output in Spanish, French and German, in addition to English.
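The abstract does not reproduce the scoring formula, but the core of Meteor (as introduced by Banerjee & Lavie, 2005, and parameterized in this 2007 release) is a recall-weighted harmonic mean of unigram precision and recall, discounted by a fragmentation penalty based on the number of contiguous matched "chunks". The sketch below is a simplified illustration using exact unigram matches only and a greedy alignment; the actual metric also matches stems and WordNet synonyms, uses an optimal alignment, and tunes the parameters (here alpha, beta, gamma are illustrative defaults) per language:

```python
def meteor_sketch(candidate, reference, alpha=0.9, beta=3.0, gamma=0.5):
    """Simplified Meteor-style score over exact unigram matches.

    alpha weights recall in the harmonic mean; beta and gamma shape the
    fragmentation penalty. Values here are illustrative, not the tuned
    parameters from the paper.
    """
    cand = candidate.split()
    ref = reference.split()

    # Greedy one-to-one alignment of identical unigrams, left to right
    # (the real metric computes an alignment minimizing chunk count).
    used = [False] * len(ref)
    alignment = []  # (candidate_index, reference_index) pairs
    for i, tok in enumerate(cand):
        for j, rtok in enumerate(ref):
            if not used[j] and tok == rtok:
                used[j] = True
                alignment.append((i, j))
                break

    m = len(alignment)  # number of matched unigrams
    if m == 0:
        return 0.0

    precision = m / len(cand)
    recall = m / len(ref)
    # Parameterized harmonic mean, weighted toward recall.
    fmean = precision * recall / (alpha * precision + (1 - alpha) * recall)

    # A chunk is a maximal run of matches contiguous in both strings.
    chunks = 1
    for (i1, j1), (i2, j2) in zip(alignment, alignment[1:]):
        if not (i2 == i1 + 1 and j2 == j1 + 1):
            chunks += 1

    penalty = gamma * (chunks / m) ** beta
    return fmean * (1 - penalty)
```

Note how even a perfect unigram match incurs a small penalty unless the matches form a single chunk, which is what lets Meteor reward word order in a way unigram precision/recall alone cannot.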