Fisher, Sarah A. 2024. Large language models and their big bullshit potential. Ethics and Information Technology 26, 67. doi:10.1007/s10676-024-09802-5
PDF (Published Version), available under a Creative Commons Attribution license (789kB).
Abstract
Newly powerful large language models have burst onto the scene, with applications across a wide range of functions. We can now expect to encounter their outputs at rapidly increasing volumes and frequencies. Some commentators claim that large language models are bullshitting, generating convincing output without regard for the truth. If correct, that would make large language models distinctively dangerous discourse participants. Bullshitters not only undermine the norm of truthfulness (by saying false things) but the normative status of truth itself (by treating it as entirely irrelevant). So, do large language models really bullshit? I argue that they can, in the sense of issuing propositional content in response to fact-seeking prompts, without having first assessed that content for truth or falsity. However, I further argue that they need not bullshit, given appropriate guardrails. So, just as with human speakers, the propensity for a large language model to bullshit depends on its own particular make-up.
| Item Type | Article |
|---|---|
| Date Type | Published Online |
| Status | Published |
| Schools | English, Communication and Philosophy |
| Publisher | Springer |
| ISSN | 1572-8439 |
| Date of First Compliant Deposit | 17 September 2024 |
| Date of Acceptance | 17 September 2024 |
| Last Modified | 24 October 2024 10:23 |
| URI | https://orca.cardiff.ac.uk/id/eprint/172197 |