Khalid, Muhammad, Nourollah, Amir Masoud and Schockaert, Steven. Item availability restricted.
PDF - Accepted Post-Print Version. Restricted to Repository staff only until 10 August 2025 due to copyright restrictions. Download (960kB)
Abstract
Large Language Models (LLMs) have been found to struggle with systematic reasoning. Even on tasks where they appear to perform well, their performance often depends on shortcuts rather than genuine reasoning abilities, leading them to collapse on out-of-distribution (OOD) examples. Post-training strategies based on reinforcement learning and chain-of-thought prompting have recently been hailed as a step change. However, little is known about the potential of the resulting "Large Reasoning Models" (LRMs) beyond maths and programming-based problem solving, where genuine OOD problems can be sparse. In this paper, we focus on tasks that require systematic relational composition for qualitative spatial and temporal reasoning. This setting allows fine control over problem difficulty, so OOD generalization can be measured precisely. We find that zero-shot LRMs generally outperform their LLM counterparts on single-path reasoning tasks but struggle in the multi-path setting. While showing comparatively better results, fine-tuned LLMs are also not capable of multi-path generalization. We also provide evidence for a behavioral interpretation of this, namely that LRMs are shallow disjunctive reasoners.
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Status: | In Press |
| Schools: | Schools > Computer Science & Informatics |
| Related URLs: | |
| Date of First Compliant Deposit: | 22 July 2025 |
| Date of Acceptance: | 15 May 2025 |
| Last Modified: | 22 Jul 2025 14:45 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/179567 |