Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities

Perez Almendros, Carla, Espinosa-Anke, Luis (ORCID: https://orcid.org/0000-0001-6830-9176) and Schockaert, Steven 2020. Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities. Presented at: The 28th International Conference on Computational Linguistics (COLING 2020), Virtual, 8-13 December 2020. Proceedings of the 28th International Conference on Computational Linguistics. Barcelona, Spain: International Committee on Computational Linguistics, pp. 5891–5902. DOI: 10.18653/v1/2020.coling-main.518

Full text not available from this repository.

Abstract

In this paper, we introduce a new annotated dataset aimed at supporting the development of NLP models to identify and categorize language that is patronizing or condescending towards vulnerable communities (e.g. refugees, homeless people, poor families). While the prevalence of such language in the general media has long been shown to have harmful effects, it differs from other types of harmful language in that it is generally used unconsciously and with good intentions. We furthermore believe that the often subtle nature of patronizing and condescending language (PCL) presents an interesting technical challenge for the NLP community. Our analysis of the proposed dataset shows that identifying PCL is hard for standard NLP models, with language models such as BERT achieving the best results.

Item Type: Conference or Workshop Item (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Data Innovation Research Institute (DIURI)
Publisher: International Committee on Computational Linguistics
Last Modified: 10 Nov 2022 09:59
URI: https://orca.cardiff.ac.uk/id/eprint/145304
