Zhou, Yi ORCID: https://orcid.org/0000-0001-7009-8515, Camacho Collados, Jose ORCID: https://orcid.org/0000-0003-1618-7239 and Bollegala, Danushka 2023. A predictive factor analysis of social biases and task-performance in pretrained masked language models. Presented at: 2023 Conference on Empirical Methods in Natural Language Processing, Singapore, 6-10 December 2023. Published in: Bouamor, Houda, Pino, Juan and Bali, Kalika eds. Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, 11082-11100. 10.18653/v1/2023.emnlp-main.683
PDF (Published Version, 333 kB), available under a Creative Commons Attribution license.
Abstract
Various types of social biases have been reported with pretrained Masked Language Models (MLMs) in prior work. However, multiple underlying factors are associated with an MLM, such as its model size, the size of the training data, training objectives, the domain from which pretraining data is sampled, tokenization, and the languages present in the pretraining corpora, to name a few. It remains unclear which of these factors influence the social biases that are learned by MLMs. To study the relationship between model factors and the social biases learned by an MLM, as well as the downstream task performance of the model, we conduct a comprehensive study over 39 pretrained MLMs covering different model sizes, training objectives, tokenization methods, training data domains and languages. Our results shed light on important factors often neglected in prior literature, such as tokenization or model objectives.
| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (Paper) |
| Date Type | Publication |
| Status | Published |
| Schools | Computer Science & Informatics |
| Publisher | Association for Computational Linguistics |
| Date of First Compliant Deposit | 12 June 2024 |
| Last Modified | 08 Jul 2024 01:30 |
| URI | https://orca.cardiff.ac.uk/id/eprint/168940 |