Cardiff University | Prifysgol Caerdydd ORCA

The (un)suitability of automatic evaluation metrics for text simplification

Alva-Manchego, Fernando, Scarton, Carolina and Specia, Lucia 2021. The (un)suitability of automatic evaluation metrics for text simplification. Computational Linguistics 47(4), pp. 861-889. 10.1162/coli_a_00418

PDF - Published Version (503kB)
Available under License Creative Commons Attribution Non-commercial No Derivatives.

Abstract

In order to simplify sentences, several rewriting operations can be performed, such as replacing complex words with simpler synonyms, deleting unnecessary information, and splitting long sentences. Despite this multi-operation nature, evaluation of automatic simplification systems relies on metrics that only moderately correlate with human judgments of the simplicity achieved by executing specific operations (e.g., simplicity gain based on lexical replacements). In this article, we investigate how well existing metrics can assess sentence-level simplifications where multiple operations may have been applied and which, therefore, require more general simplicity judgments. For that, we first collect a new and more reliable data set for evaluating the correlation of metrics and human judgments of overall simplicity. Second, we conduct the first meta-evaluation of automatic metrics in Text Simplification, using our new data set (and other existing data) to analyze how the correlation between metrics’ scores and human judgments varies across three dimensions: the perceived simplicity level, the system type, and the set of references used for computation. We show that these three aspects affect the correlations and, in particular, highlight the limitations of commonly used operation-specific metrics. Finally, based on our findings, we propose a set of recommendations for automatic evaluation of multi-operation simplifications, suggesting which metrics to compute and how to interpret their scores.
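As an illustration of the kind of meta-evaluation the abstract describes, the sketch below computes the correlation between automatic metric scores and human simplicity ratings for a set of system outputs. All values and variable names are hypothetical placeholders (not data from the article); it simply assumes one metric score and one averaged human rating per simplified sentence.

```python
# Minimal sketch of a metric/human-judgment correlation analysis.
# The scores and ratings below are invented placeholders, not data from the article.
from scipy.stats import pearsonr, spearmanr

# Hypothetical automatic metric scores (e.g., from SARI or BLEU) for five system outputs
metric_scores = [38.2, 41.5, 29.7, 45.0, 33.8]

# Hypothetical human judgments of overall simplicity for the same outputs (1-5 scale)
human_ratings = [3.2, 3.8, 2.5, 4.1, 2.9]

# Correlations between metric scores and human judgments; the article analyses how
# such correlations vary with simplicity level, system type, and the reference set used.
r, r_pval = pearsonr(metric_scores, human_ratings)
rho, rho_pval = spearmanr(metric_scores, human_ratings)
print(f"Pearson r = {r:.3f} (p = {r_pval:.3f})")
print(f"Spearman rho = {rho:.3f} (p = {rho_pval:.3f})")
```

In practice, such an analysis would be repeated per subset (e.g., grouping outputs by perceived simplicity level or by system type) to see where a metric's correlation with human judgments breaks down.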

Item Type: Article
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Additional Information: This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License
Publisher: Association for Computational Linguistics
ISSN: 0891-2017
Date of First Compliant Deposit: 14 February 2022
Date of Acceptance: 28 July 2021
Last Modified: 23 May 2023 22:57
URI: https://orca.cardiff.ac.uk/id/eprint/147256

Citation Data

Cited 3 times in Scopus.
