Cardiff University | Prifysgol Caerdydd ORCA
Online Research @ Cardiff 

Multimodal explanations for AI-based multisensor fusion

Braines, Dave, Preece, Alun (ORCID: https://orcid.org/0000-0003-0349-9057) and Harborne, Dan 2018. Multimodal explanations for AI-based multisensor fusion. Presented at: NATO SET-262 RSM on Artificial Intelligence for Military Multisensor Fusion Engines, Budapest, Hungary, 5-6 November 2018.

Full text not available from this repository.

Abstract

The recent resurgence in the effectiveness of artificial intelligence (AI) and machine learning (ML) techniques for image, text and signal processing has come with a growing recognition that these techniques are “inscrutable”: they can be hard for users to trust because they lack effective means of generating explanations for their outputs. Consequently, there is currently a great deal of research and development addressing this problem, producing a sizeable number of proposed explanation techniques for AI/ML approaches operating on a variety of data modalities. However, a problem that has received less attention is: which modality of explanation should be chosen for a particular user and task? For example, many techniques attempt to produce visualizations of the workings of an ML model, e.g., so-called “saliency maps” for a deep neural network, but there may be multiple reasons why this mode of explanation is not appropriate for a user, including: (i) they may be operating at the edge of the network with a device that is not suited to receiving or displaying such a visualization; (ii) it may not be appropriate for security reasons to send them a visualization derived from the source imagery (e.g., if the location of the camera system is sensitive); (iii) this kind of explanation may be “too low level” for that user’s needs – they may require something more “causal”, for example. One approach that may address all three of these example issues would be to map the explanation from a visualization to a textual rationalization. In this paper we explore the issue of generating explanations in a range of modalities in the context of AI/ML services that operate on multisensor data, and show that a “grammar-based” approach that separates atomic explanation-generation and communication actions offers sufficient scope and flexibility to address a set of mission scenarios.
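To make the abstract's central idea concrete, the sketch below illustrates one plausible reading of a "grammar-based" separation of atomic explanation-generation actions from communication actions, with modality chosen against the three example constraints (edge device, security, abstraction level). This is a minimal illustrative sketch only: the names (Context, gen_saliency_map, gen_text_rationale, compose_explanation) and the selection logic are hypothetical and are not taken from the paper, whose actual grammar is not reproduced in this record.

```python
# Hypothetical sketch of a grammar-based explanation pipeline; all names
# and selection rules are assumptions, not the paper's published method.
from dataclasses import dataclass

@dataclass
class Context:
    """Constraints on the explanation consumer that drive modality choice."""
    device_supports_images: bool  # edge-device capability, issue (i)
    imagery_releasable: bool      # security constraint, issue (ii)
    wants_causal_summary: bool    # abstraction-level need, issue (iii)

# --- Atomic explanation-GENERATION actions ----------------------------------
def gen_saliency_map(model_output: str) -> dict:
    """Stand-in for a saliency-map generator over a deep network's input."""
    return {"modality": "image", "payload": f"saliency({model_output})"}

def gen_text_rationale(model_output: str) -> dict:
    """Stand-in for mapping a visual explanation to a textual rationalization."""
    return {"modality": "text",
            "payload": f"Output '{model_output}' was driven by the dominant sensor cues."}

# --- Atomic COMMUNICATION actions --------------------------------------------
def send(explanation: dict, channel: str) -> None:
    """Deliver an already-generated explanation over a chosen channel."""
    print(f"[{channel}] {explanation['modality']}: {explanation['payload']}")

# --- Grammar-style composition: pick a generation action, then a
# --- communication action, based on the consumer's context -------------------
def compose_explanation(model_output: str, ctx: Context) -> None:
    if (ctx.device_supports_images and ctx.imagery_releasable
            and not ctx.wants_causal_summary):
        send(gen_saliency_map(model_output), channel="image-capable UI")
    else:
        # Any of issues (i)-(iii) rules out imagery, so fall back to text.
        send(gen_text_rationale(model_output), channel="low-bandwidth text link")

# Example: an edge user whose camera location is sensitive receives text.
compose_explanation("vehicle detected", Context(True, False, False))
```

Because generation and communication are separate atomic actions, new modality pairings (e.g., a visualization regenerated as text for a constrained device) can be composed without changing either set of actions, which is the flexibility the abstract attributes to the grammar-based approach.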

Item Type: Conference or Workshop Item (Paper)
Date Type: Publication
Status: Published
Schools: Computer Science & Informatics
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Uncontrolled Keywords: AI, explanation, human-computer interaction
Funders: Dstl, Army Research Lab
Date of Acceptance: 5 November 2018
Last Modified: 24 Oct 2022 08:02
URI: https://orca.cardiff.ac.uk/id/eprint/116675
