Gajbhiye, Amit, Bouraoui, Zied, Li, Na, Chatterjee, Usashi, Espinosa-Anke, Luis
Abstract
Concepts play a central role in many applications, including settings where concepts have to be modelled in the absence of sentence context. Previous work has therefore focused on distilling decontextualised concept embeddings from language models. However, concepts can be modelled from different perspectives, whereas concept embeddings mostly capture taxonomic structure. To address this issue, we propose a strategy for identifying the properties that concepts, from a potentially large concept vocabulary, share with one another. We then represent each concept in terms of the properties it shares with the other concepts. To demonstrate the practical usefulness of this way of modelling concepts, we consider the task of ultra-fine entity typing, a challenging multi-label classification problem. We show that by augmenting the label set with shared properties, we can improve the performance of state-of-the-art models for this task.
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Status: | Published |
| Schools: | Advanced Research Computing @ Cardiff (ARCCA); Computer Science & Informatics |
| Publisher: | Association for Computational Linguistics |
| ISBN: | 979-8-89176-060-8 |
| Date of First Compliant Deposit: | 13 February 2024 |
| Date of Acceptance: | 7 October 2023 |
| Last Modified: | 10 Jun 2024 09:04 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/165643 |