Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

SemRel2024: A collection of semantic textual relatedness datasets for 13 languages

Ousidhoum, Nedjma, Muhammad, Shamsuddeen, Abdalla, Mohamed, Abdulmumin, Idris, Ahmad, Ibrahim, Ahuja, Sanchit, Aji, Alham, Araujo, Vladimir, Ayele, Abinew, Baswani, Pavan, Beloucif, Meriem, Biemann, Chris, Bourhim, Sofia, Kock, Christine, Dekebo, Genet, Hourrane, Oumaima, Kanumolu, Gopichand, Madasu, Lokesh, Rutunda, Samuel, Shrivastava, Manish, Solorio, Thamar, Surange, Nirmal, Tilaye, Hailegnaw, Vishnubhotla, Krishnapriya, Winata, Genta, Yimam, Seid and Mohammad, Saif 2024. SemRel2024: A collection of semantic textual relatedness datasets for 13 languages. Presented at: SemRel2024, Bangkok, Thailand, 11-16 August 2024. Findings of the Association for Computational Linguistics. Association for Computational Linguistics, pp. 2512-2530. DOI: 10.18653/v1/2024.findings-acl.147

Full text not available from this repository.

Abstract

Exploring and quantifying semantic relatedness is central to representing language and holds significant implications across various NLP tasks. While earlier NLP research primarily focused on semantic similarity, often within the English language context, we instead investigate the broader phenomenon of semantic relatedness. In this paper, we present SemRel, a new semantic relatedness dataset collection annotated by native speakers across 13 languages: Afrikaans, Algerian Arabic, Amharic, English, Hausa, Hindi, Indonesian, Kinyarwanda, Marathi, Moroccan Arabic, Modern Standard Arabic, Spanish, and Telugu. These languages originate from five distinct language families and are predominantly spoken in Africa and Asia – regions characterised by a relatively limited availability of NLP resources. Each instance in the SemRel datasets is a sentence pair associated with a score that represents the degree of semantic textual relatedness between the two sentences. The scores are obtained using a comparative annotation framework. We describe the data collection and annotation processes, challenges when building the datasets, baseline experiments, and their impact and utility in NLP.

Item Type: Conference or Workshop Item (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Association for Computational Linguistics
ISBN: 9798331301828
Last Modified: 05 Jun 2025 14:15
URI: https://orca.cardiff.ac.uk/id/eprint/178472
