Ariyani, Nurul, Bouraoui, Zied, Booth, Richard (ORCID: https://orcid.org/0000-0002-6647-6381) and Schockaert, Steven (ORCID: https://orcid.org/0000-0002-9256-2881) 2024. Can language models learn embeddings of propositional logic assertions? Presented at: The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), Turin, Italy, 20-25 May 2024.
Abstract
Natural language offers an appealing alternative to formal logics as a vehicle for representing knowledge. However, using natural language means that standard methods for automated reasoning can no longer be applied. A popular solution is to use transformer-based language models (LMs) to reason directly about knowledge expressed in natural language, but this has two important limitations. First, the set of premises is often too large to be processed directly by the LM, so a retrieval strategy is needed to select the most relevant premises when trying to infer some conclusion. Second, LMs have been found to learn shortcuts and thus lack robustness, casting doubt on the extent to which they actually understand the knowledge that is expressed. Given these limitations, we explore the following alternative: rather than using LMs to perform reasoning directly, we use them to learn embeddings of individual assertions. Reasoning is then carried out by manipulating the learned embeddings. We show that this strategy is feasible to some extent, while also highlighting the limitations of directly fine-tuning LMs to learn the required embeddings.
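To make the general idea concrete, the following is a minimal sketch of the kind of pipeline the abstract describes: encode each assertion into a vector, then carry out a crude "reasoning" step by manipulating the vectors. Everything here is an assumption for illustration only; the off-the-shelf encoder, mean-pooling aggregation, and cosine-similarity scoring are not the paper's method, which fine-tunes LMs to learn task-specific embeddings.

```python
# Illustrative sketch only (not the paper's method): embed propositional
# assertions with an off-the-shelf sentence encoder, then score a candidate
# conclusion by manipulating the learned vectors.
from sentence_transformers import SentenceTransformer
import numpy as np

# Assumed generic encoder; the paper instead fine-tunes an LM for this task.
model = SentenceTransformer("all-MiniLM-L6-v2")

premises = [
    "If it rains, the ground is wet.",
    "It rains.",
]
conclusion = "The ground is wet."

# Embed each assertion individually: one vector per premise/conclusion.
prem_vecs = model.encode(premises, normalize_embeddings=True)
concl_vec = model.encode([conclusion], normalize_embeddings=True)[0]

# Toy "reasoning" step: aggregate the premise embeddings (mean pooling here,
# chosen only for illustration) and score the conclusion by cosine similarity.
agg = prem_vecs.mean(axis=0)
agg = agg / np.linalg.norm(agg)
score = float(agg @ concl_vec)
print(f"entailment score: {score:.3f}")
```

A generic encoder like this gives only topical similarity, which is exactly why the paper investigates fine-tuning LMs so that the embedding-manipulation step reflects logical entailment rather than surface overlap.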
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Status: | In Press |
| Schools: | Computer Science & Informatics |
| Date of First Compliant Deposit: | 8 May 2024 |
| Date of Acceptance: | 20 February 2024 |
| Last Modified: | 11 Jun 2024 09:26 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/168186 |