Perez Almendros, Carla
PDF (Published Version), available under a Creative Commons Attribution License (369kB)
Abstract
Gender bias has been widely studied by the NLP community. However, other, more subtle variations of it, such as mansplaining, have so far received little attention. Mansplaining is a discriminatory behaviour that consists of condescending treatment of, or discourse towards, women. In this paper, we introduce and analyze Well, actually..., a corpus of 886 mansplaining stories experienced by women. We analyze the corpus in terms of features such as offensiveness, sentiment, and misogyny, among others. We also explore to what extent Large Language Models (LLMs) can understand and identify mansplaining and other gender-related microaggressions. Specifically, we experiment with ChatGPT-3.5-Turbo and LLaMA-2 (13b and 70b), using both targeted and open questions. Our findings suggest that, although they can identify mansplaining to some extent, LLMs still struggle to point out this attitude and even reproduce some of the social patterns behind mansplaining situations, for instance by praising men for giving unsolicited advice to women.
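The abstract mentions probing LLMs with targeted questions about mansplaining stories. As a rough illustration of what such a targeted probe could look like (this is a minimal sketch, not the authors' actual protocol; the prompt wording, the `classify_story` helper, and the example story are assumptions introduced here), one might query a chat model via the OpenAI Python SDK:

```python
# Minimal sketch of a "targeted question" probe for mansplaining.
# NOTE: the prompt text, helper function, and example story are illustrative
# assumptions; the paper's actual prompts and evaluation setup may differ.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

TARGETED_PROMPT = (
    "Read the following first-person story and answer 'yes' or 'no': "
    "does it describe mansplaining, i.e. condescending, unsolicited "
    "explanation or advice given to a woman?\n\nStory: {story}"
)

def classify_story(story: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the model a targeted yes/no question about a single story."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": TARGETED_PROMPT.format(story=story)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().lower()

if __name__ == "__main__":
    example = (
        "I mentioned I work as an electrician, and a male colleague started "
        "explaining to me, step by step, how to wire a plug."
    )
    print(classify_story(example))  # output depends on the model's judgement
```

An "open question" variant would instead ask the model to describe what, if anything, is problematic in the story, without naming mansplaining in the prompt.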
| Item Type | Conference or Workshop Item (Paper) |
|---|---|
| Status | Published |
| Schools | Schools > Computer Science & Informatics |
| Publisher | ELRA and ICCL |
| ISBN | 9782493814104 |
| Date of First Compliant Deposit | 20 February 2025 |
| Last Modified | 20 Feb 2025 09:45 |
| URI | https://orca.cardiff.ac.uk/id/eprint/175713 |