Edwards, Aleksandra and Camacho-Collados, Jose
Full text: PDF (Published Version), available under a Creative Commons Attribution license (725kB)
Abstract
Pre-trained language models provide the foundations for state-of-the-art performance across a wide range of natural language processing tasks, including text classification. However, most classification datasets assume a large amount of labeled data, which is commonly not the case in practical settings. In particular, in this paper we compare the performance of a lightweight linear classifier based on word embeddings, i.e., fastText (Joulin et al., 2017), with that of a pre-trained language model, i.e., BERT (Devlin et al., 2019), across a wide range of datasets and classification tasks. In general, results show the importance of domain-specific unlabeled data, whether in the form of word embeddings or language models. As for the comparison, BERT outperforms all baselines on standard datasets with large training sets. However, in settings with small training datasets a simple method like fastText coupled with domain-specific word embeddings performs equally well or better than BERT, even when BERT is pre-trained on domain-specific data.
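As a rough illustration of the lightweight baseline the abstract describes, the sketch below trains a fastText supervised classifier initialised with domain-specific pre-trained word vectors. It is a minimal sketch rather than the paper's exact setup: the file paths (`train.txt`, `domain.vec`) and all hyperparameter values are hypothetical placeholders, and only documented `fasttext` library calls are used.

```python
# Minimal sketch (not the paper's exact configuration): a fastText
# supervised classifier seeded with domain-specific word embeddings.
# File names and hyperparameters are hypothetical placeholders.
import fasttext

# Training data in fastText's expected format: one example per line,
# e.g. "__label__positive great product , works as expected"
TRAIN_FILE = "train.txt"        # hypothetical path to labeled data
DOMAIN_VECTORS = "domain.vec"   # hypothetical domain-specific .vec file

# `dim` must match the dimensionality of the vectors in DOMAIN_VECTORS.
model = fasttext.train_supervised(
    input=TRAIN_FILE,
    pretrainedVectors=DOMAIN_VECTORS,
    dim=300,
    epoch=25,
    lr=0.5,
    wordNgrams=2,
)

# Predict the top label (and its probability) for a new document.
labels, probs = model.predict("the treatment reduced symptoms significantly")
print(labels[0], probs[0])
```

Without the `pretrainedVectors` argument, the same call trains embeddings from scratch on the labeled data alone, which is the plain fastText baseline the domain-specific variant is compared against.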
| Item Type: | Conference or Workshop Item (Paper) |
| --- | --- |
| Date Type: | Publication |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | International Committee on Computational Linguistics |
| ISBN: | 978-1-952148-27-9 |
| Date of First Compliant Deposit: | 25 September 2025 |
| Last Modified: | 25 Sep 2025 13:30 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/181321 |