Olivares, Daniel Guzman, Quijano, Lara and Liberatore, Federico
PDF - Published Version (7MB)
Abstract
The rise of generative chat-based Large Language Models (LLMs) over the past two years has spurred a race to develop systems that promise near-human conversational and reasoning experiences. However, recent studies indicate that the language understanding offered by these models remains limited and far from human-like performance, particularly in grasping the contextual meanings of words—an essential aspect of reasoning. In this paper, we present a simple yet computationally efficient framework for multilingual Word Sense Disambiguation (WSD). Our approach reframes the WSD task as a cluster discrimination analysis over a semantic network refined from BabelNet using group algebra. We validate our methodology across multiple WSD benchmarks, achieving a new state of the art for all languages and tasks, as well as in individual assessments by part of speech. Notably, our model significantly surpasses the performance of current alternatives, even in low-resource languages, while reducing the parameter count by 72%.
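The abstract frames WSD as a cluster discrimination problem over sense representations. As a rough illustration only (it does not reproduce the paper's actual pipeline of BabelNet refinement via group algebra or its multilingual model), the hedged Python sketch below shows the general idea of assigning a target word to the sense cluster whose centroid is most similar to its contextual embedding; all identifiers, vectors, and sense labels are invented for the example.

```python
# Toy sketch of WSD as cluster discrimination: pick the sense whose cluster
# centroid is closest (cosine similarity) to the contextual embedding of the
# target word. This is NOT the paper's method; values and names are hypothetical.
import numpy as np

def disambiguate(context_embedding: np.ndarray,
                 sense_centroids: dict) -> str:
    """Return the sense ID whose centroid is most cosine-similar to the context."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(sense_centroids,
               key=lambda s: cosine(context_embedding, sense_centroids[s]))

# Hypothetical 2-D sense centroids for the word "bank".
centroids = {
    "bank%financial": np.array([0.9, 0.1]),
    "bank%river":     np.array([0.1, 0.9]),
}
ctx = np.array([0.8, 0.3])            # stand-in for a contextual embedding of "bank"
print(disambiguate(ctx, centroids))   # -> "bank%financial"
```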
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Published Online |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | ACL |
| ISBN: | 979-8-89176-189-6 |
| Date of First Compliant Deposit: | 7 May 2025 |
| Date of Acceptance: | 23 January 2025 |
| Last Modified: | 13 May 2025 09:50 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/178131 |