Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Exploring cross-cultural differences in English hate speech annotations: From dataset construction to analysis

Lee, Nayeon, Jung, Chani, Myung, Junho, Jin, Jiho, Camacho Collados, Jose ORCID: https://orcid.org/0000-0003-1618-7239, Kim, Juho and Oh, Alice 2024. Exploring cross-cultural differences in English hate speech annotations: From dataset construction to analysis. Presented at: 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Mexico City, Mexico, 16-21 June 2024. Published in: Duh, Kevin, Gomez, Helena and Bethard, Steven eds. Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). pp. 4205-4224. 10.18653/v1/2024.naacl-long.236

PDF - Published Version
Available under License Creative Commons Attribution.

Abstract

Most hate speech datasets neglect the cultural diversity within a single language, resulting in a critical shortcoming in hate speech detection. To address this, we introduce CREHate, a CRoss-cultural English Hate speech dataset. To construct CREHate, we follow a two-step procedure: 1) cultural post collection and 2) cross-cultural annotation. We sample posts from the SBIC dataset, which predominantly represents North America, and collect posts from four geographically diverse English-speaking countries (Australia, the United Kingdom, Singapore, and South Africa) using culturally hateful keywords retrieved from our survey. Annotations are collected from the four countries plus the United States to establish representative labels for each country. Our analysis highlights statistically significant disparities in hate speech annotations across countries: only 56.2% of the posts in CREHate achieve consensus among all countries, with a peak pairwise label difference rate of 26%. Qualitative analysis shows that label disagreement occurs mostly due to differing interpretations of sarcasm and annotators' personal biases on divisive topics. Lastly, we evaluate large language models (LLMs) in a zero-shot setting and show that current LLMs tend to achieve higher accuracy on labels from Anglosphere countries in CREHate. Our dataset and code are available at: https://github.com/nlee0212/CREHate
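The two headline statistics above (the 56.2% all-country consensus rate and the 26% peak pairwise label difference rate) are straightforward to reproduce once the per-country labels are in tabular form. The sketch below is illustrative only: the file name crehate.csv, the post column, and the country label columns (US, AU, GB, SG, ZA holding binary hate labels) are assumptions, not the dataset's actual schema; consult the linked GitHub repository for the real field names.

```python
# Minimal sketch of the cross-country agreement analysis described in the
# abstract. All file and column names below are hypothetical placeholders.
from itertools import combinations

import pandas as pd

COUNTRIES = ["US", "AU", "GB", "SG", "ZA"]  # assumed binary (0/1) label columns

df = pd.read_csv("crehate.csv")  # assumed tabular export of CREHate

# All-country consensus: fraction of posts whose five labels are identical.
consensus = (df[COUNTRIES].nunique(axis=1) == 1).mean()
print(f"All-country consensus: {consensus:.1%}")

# Pairwise label difference rate for each pair of countries.
for a, b in combinations(COUNTRIES, 2):
    diff_rate = (df[a] != df[b]).mean()
    print(f"{a} vs {b}: {diff_rate:.1%} labeled differently")
```

The zero-shot LLM evaluation can be sketched in the same spirit: prompt a chat model for a binary hate/non-hate judgment on each post, then score the predictions against each country's labels separately. The prompt wording and model name below are illustrative choices, not the paper's exact setup.

```python
# Hedged sketch of a zero-shot evaluation: one binary judgment per post,
# scored against each country's labels. Prompt and model are placeholders.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

COUNTRIES = ["US", "AU", "GB", "SG", "ZA"]   # assumed label columns
df = pd.read_csv("crehate.csv")              # assumed dataset export

PROMPT = ("Is the following post hate speech? "
          "Answer with exactly one word: Yes or No.\n\nPost: {post}")

def zero_shot_label(post: str) -> int:
    """Return 1 if the model answers Yes (hate speech), else 0."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{"role": "user", "content": PROMPT.format(post=post)}],
        temperature=0,
    )
    return int(resp.choices[0].message.content.strip().lower().startswith("yes"))

preds = df["post"].map(zero_shot_label)
for country in COUNTRIES:
    accuracy = (preds == df[country]).mean()
    print(f"Zero-shot accuracy vs {country} labels: {accuracy:.1%}")
```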

Item Type: Conference or Workshop Item (Paper)
Status: Published
Schools: Computer Science & Informatics
Date of First Compliant Deposit: 15 October 2024
Last Modified: 15 October 2024, 15:20
URI: https://orca.cardiff.ac.uk/id/eprint/172499
