Davies, Cai, Roig Vilamala, Marc and Preece, Alun ORCID: https://orcid.org/0000-0003-0349-9057 2023. Knowledge from uncertainty in evidential deep learning. [Online]. Available at: https://charliezhaoyinpeng.github.io/UDM-KDD23/ap/
PDF - Accepted Post-Print Version
Abstract
This work reveals an `evidential signal' that emerges from the uncertainty value in Evidential Deep Learning (EDL). EDL is one example of a class of uncertainty-aware deep learning approaches designed to provide confidence (or epistemic uncertainty) about the current test sample. In particular, for computer vision models and bidirectional-encoder large language models, the `evidential signal' arising from the Dirichlet strength in EDL can, in some cases, discriminate between classes; this effect is particularly strong when using large language models. We hypothesise that the KL regularisation term causes EDL to couple aleatoric and epistemic uncertainty. In this paper, we empirically investigate the correlations between misclassification and evaluated uncertainty, and show that EDL's `evidential signal' is due to misclassification bias. We critically evaluate EDL against other Dirichlet-based approaches, namely Generative Evidential Neural Networks (EDL-GEN) and Prior Networks, and show theoretically and empirically the differences between these loss functions. We conclude that EDL's coupling of uncertainty arises from these differences, due to the use (or lack) of out-of-distribution samples during training.
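To make the abstract's notion of "Dirichlet strength" concrete, the following is a minimal sketch of how EDL-style approaches (following the standard formulation of Sensoy et al., 2018, not this paper's own code) derive an uncertainty mass from per-class evidence. The function name and inputs are illustrative, not taken from the paper.

```python
import numpy as np

def edl_uncertainty(evidence):
    """Belief masses and uncertainty from per-class evidence (standard EDL).

    Dirichlet parameters: alpha = evidence + 1.
    Dirichlet strength:   S = sum(alpha).
    Uncertainty mass:     u = K / S, for K classes.
    Belief masses and u sum to 1 (subjective-logic opinion).
    """
    evidence = np.asarray(evidence, dtype=float)
    alpha = evidence + 1.0        # Dirichlet parameters
    strength = alpha.sum()        # Dirichlet strength S
    k = evidence.shape[0]         # number of classes K
    belief = evidence / strength  # per-class belief masses
    u = k / strength              # epistemic uncertainty mass
    return belief, u

# Strong evidence for one class -> low uncertainty
b, u = edl_uncertainty([10.0, 0.0, 0.0])
# No evidence at all -> maximal uncertainty (u = 1)
b0, u0 = edl_uncertainty([0.0, 0.0, 0.0])
```

The `evidential signal' the paper studies is the observation that this `u` can itself carry class-discriminative information, rather than being a pure confidence measure.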
| Item Type: | Website Content |
|---|---|
| Status: | Published |
| Schools: | Computer Science & Informatics |
| Date of First Compliant Deposit: | 11 September 2024 |
| Date of Acceptance: | 23 June 2023 |
| Last Modified: | 17 Sep 2024 01:29 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/172042 |