Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

PerCul: A story-driven cultural evaluation of LLMs in Persian

Moosavi Monazzah, Erfan, Rahimzadeh, Vahid, Yaghoobzadeh, Yadollah, Shakery, Azadeh and Pilehvar, Mohammad Taher 2025. PerCul: A story-driven cultural evaluation of LLMs in Persian. Presented at: 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL 2025), Albuquerque, New Mexico, USA, 29 April - 4 May 2025. Published in: Chiruzzo, Luis, Ritter, Alan and Wang, Lu eds. Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Albuquerque, New Mexico: Association for Computational Linguistics, pp. 12670-12687. 10.18653/v1/2025.naacl-long.631

PDF - Published Version (11MB)
Available under License Creative Commons Attribution.

Abstract

Large language models predominantly reflect Western cultures, largely due to the dominance of English-centric training data. This imbalance presents a significant challenge, as LLMs are increasingly used across diverse contexts without adequate evaluation of their cultural competence in non-English languages, including Persian. To address this gap, we introduce PerCul, a carefully constructed dataset designed to assess the sensitivity of LLMs toward Persian culture. PerCul features story-based, multiple-choice questions that capture culturally nuanced scenarios. Unlike existing benchmarks, PerCul is curated with input from native Persian annotators to ensure authenticity and to prevent the use of translation as a shortcut. We evaluate several state-of-the-art multilingual and Persian-specific LLMs, establishing a foundation for future research in cross-cultural NLP evaluation. Our experiments demonstrate an 11.3% gap between the best closed-source model and the layperson baseline, which widens to 21.3% with the best open-weight model. The dataset is available at: https://huggingface.co/datasets/teias-ai/percul
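Since PerCul is hosted on the Hugging Face Hub, it can presumably be loaded with the standard `datasets` library. The sketch below is not the authors' evaluation code: only the repository ID comes from the abstract, while the split inspection step and the accuracy helper are illustrative assumptions that should be checked against the dataset's actual schema.

    from datasets import load_dataset

    # Load PerCul from the Hugging Face Hub (repository ID taken from the abstract).
    ds = load_dataset("teias-ai/percul")
    print(ds)  # shows the actual splits and column names of the dataset

    # Hypothetical scoring helper for a multiple-choice benchmark:
    # the fraction of model predictions that match the gold answers.
    def accuracy(predictions, references):
        correct = sum(p == r for p, r in zip(predictions, references))
        return correct / len(references)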

Item Type: Conference or Workshop Item - published (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Publisher: Association for Computational Linguistics
ISBN: 979-8-89176-189-6
Date of First Compliant Deposit: 27 January 2026
Last Modified: 27 Jan 2026 10:30
URI: https://orca.cardiff.ac.uk/id/eprint/184221
