Kumar, Nitesh
Abstract
Conceptual spaces represent entities in terms of their primitive semantic features. Such representations are highly valuable but they are notoriously difficult to learn, especially when it comes to modelling perceptual and subjective features. Distilling conceptual spaces from Large Language Models (LLMs) has recently emerged as a promising strategy, but existing work has been limited to probing pre-trained LLMs using relatively simple zero-shot strategies. We focus in particular on the task of ranking entities according to a given conceptual space dimension. Unfortunately, we cannot directly fine-tune LLMs on this task, because ground truth rankings for conceptual space dimensions are rare. We therefore use more readily available features as training data and analyse whether the ranking capabilities of the resulting models transfer to perceptual and subjective features. We find that this is indeed the case, to some extent, but having at least some perceptual and subjective features in the training data seems essential for achieving the best results.
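The abstract only sketches the approach at a high level. As a purely illustrative sketch of how ranking-style training data might be derived from a readily available feature (the feature name, entities, values, and prompt format below are assumptions for illustration, not taken from the paper), known feature values can be converted into pairwise comparison examples for fine-tuning an LLM on the ranking task:

```python
from itertools import combinations

# Hypothetical training data: entities with known values for a
# readily available feature (here "mass" in kg). The paper's actual
# features, entities, and example format are not given in this record.
feature = "mass"
entities = {"mouse": 0.02, "cat": 4.5, "horse": 500.0, "elephant": 6000.0}

def make_pairwise_examples(feature, values):
    """Turn known feature values into pairwise ranking examples,
    usable as fine-tuning data for an LLM ranking objective."""
    examples = []
    for a, b in combinations(values, 2):
        # The entity with the larger feature value is the target answer.
        higher = a if values[a] > values[b] else b
        prompt = f"Which has greater {feature}: {a} or {b}?"
        examples.append({"prompt": prompt, "target": higher})
    return examples

for ex in make_pairwise_examples(feature, entities):
    print(ex["prompt"], "->", ex["target"])
```

For perceptual or subjective dimensions (e.g. sweetness, danger), such ground-truth values are rarely available, which is why the paper trains on more readily available features and tests whether the learned ranking ability transfers.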
| Field | Value |
|---|---|
| Item Type | Conference or Workshop Item (Paper) |
| Date Type | Publication |
| Status | Published |
| Schools | Professional Services > Advanced Research Computing @ Cardiff (ARCCA); Schools > Computer Science & Informatics |
| Publisher | Association for Computational Linguistics |
| Date of First Compliant Deposit | 30 July 2024 |
| Date of Acceptance | 16 May 2024 |
| Last Modified | 28 Feb 2025 16:51 |
| URI | https://orca.cardiff.ac.uk/id/eprint/170092 |