Tomsett, Richard; Braines, David; Harborne, Daniel; Preece, Alun David (ORCID: https://orcid.org/0000-0003-0349-9057) and Chakraborty, Supriyo. 2018. Interpretable to whom? A role-based model for analyzing interpretable machine learning systems. Presented at: 3rd Annual Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden, 14 July 2018.
PDF (Accepted Post-Print Version, 95 kB)
Abstract
Several researchers have argued that a machine learning system’s interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable. We describe a model intended to help answer this question, by identifying different roles that agents can fulfill in relation to the machine learning system. We illustrate the use of our model in a variety of scenarios, exploring how an agent’s role influences its goals, and the implications for defining interpretability. Finally, we make suggestions for how our model could be useful to interpretability researchers, system developers, and regulatory bodies auditing machine learning systems.
| Item Type: | Conference or Workshop Item (Paper) |
|---|---|
| Date Type: | Completion |
| Status: | Unpublished |
| Schools: | Computer Science & Informatics; Crime and Security Research Institute (CSURI) |
| Date of First Compliant Deposit: | 19 June 2018 |
| Last Modified: | 23 Oct 2022 14:02 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/112597 |