Grange, Jacques A. (ORCID: https://orcid.org/0000-0001-5197-249X), Princis, Henrijs, Kozlowski, Theodor R. W., Amadou-Dioffo, Aissa, Wu, Jing (ORCID: https://orcid.org/0000-0001-5123-9861), Hicks, Yulia A. (ORCID: https://orcid.org/0000-0002-7179-4587) and Johansen, Mark K. (ORCID: https://orcid.org/0000-0001-6429-1976) 2022. XAI & I: Self-explanatory AI facilitating mutual understanding between AI and human experts. Presented at: 26th International Conference on Knowledge-Based and Intelligent Information & Engineering Systems (KES 2022), 7-9 September 2022. Procedia Computer Science. Elsevier. DOI: 10.1016/j.procs.2022.09.419
PDF - Published Version. Available under License Creative Commons Attribution Non-commercial No Derivatives. Download (1MB)
Abstract
Traditionally, explainable artificial intelligence seeks to provide explanation and interpretability for high-performing black-box models such as deep neural networks. Interpreting such models remains difficult because of their high complexity. An alternative approach is to force a deep neural network to use human-intelligible features as the basis for its decisions. We tested this approach using the natural category domain of rock types. We compared the performance of a black-box implementation of transfer learning using ResNet50 to that of a network first trained to predict expert-identified features and then forced to use those features to categorise rock images. The performance of this feature-constrained network was virtually identical to that of the unconstrained network. Further, when a partially constrained network was forced to condense its representation down to a small number of features without being trained on expert features, the resulting abstracted features were not intelligible; nevertheless, an affine transformation of these features could be found that aligned well with expert-intelligible features. These findings show that making an AI intrinsically intelligible need not come at the cost of performance.
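The paper itself does not ship code; the following is a minimal sketch of how a feature-constrained network of the kind the abstract describes might be built with Keras: a frozen ResNet50 backbone, a bottleneck layer trained to predict expert-identified features, and a classifier that sees only those feature predictions. The feature count, category count, loss choices, and two-stage training schedule are assumptions for illustration, not the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_EXPERT_FEATURES = 18  # hypothetical count of expert-identified rock features
N_ROCK_TYPES = 30       # hypothetical number of rock categories

# Frozen ResNet50 backbone, as in standard transfer learning.
backbone = tf.keras.applications.ResNet50(
    include_top=False, weights="imagenet", pooling="avg")
backbone.trainable = False

# Real use would first apply tf.keras.applications.resnet50.preprocess_input.
inputs = layers.Input(shape=(224, 224, 3))
x = backbone(inputs, training=False)

# Bottleneck: the network must express each image as predictions of
# human-intelligible expert features before it may categorise.
expert_features = layers.Dense(N_EXPERT_FEATURES, name="expert_features")(x)

# The classifier sees ONLY the expert-feature predictions.
outputs = layers.Dense(N_ROCK_TYPES, activation="softmax",
                       name="rock_type")(expert_features)
model = Model(inputs, outputs)

# Stage 1 (sketch): fit the feature head to expert ratings of each image.
feature_model = Model(inputs, expert_features)
feature_model.compile(optimizer="adam", loss="mse")
# feature_model.fit(images, expert_ratings, ...)

# Stage 2 (sketch): freeze the feature head so the classifier is forced
# to work from the expert features alone, then train on category labels.
model.get_layer("expert_features").trainable = False
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(images, rock_labels, ...)
```

The affine alignment the abstract mentions for the partially constrained network could be recovered post hoc in the same spirit, e.g. by a least-squares regression from the bottleneck activations to the expert feature ratings; that step is likewise an assumption about the method, not code from the paper.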
| Item Type: | Conference or Workshop Item (Paper) | 
|---|---|
| Date Type: | Published Online | 
| Status: | Published | 
| Schools: | Psychology; Computer Science & Informatics | 
| Publisher: | Elsevier | 
| ISSN: | 1877-0509 | 
| Date of First Compliant Deposit: | 18 October 2022 | 
| Date of Acceptance: | 30 September 2022 | 
| Last Modified: | 26 Jan 2023 22:27 | 
| URI: | https://orca.cardiff.ac.uk/id/eprint/153482 | 