Lou, Jianxun, Lin, Hanhe, Marshall, David, Saupe, Dietmar and Liu, Hantao (ORCID: https://orcid.org/0000-0003-4544-3481) 2022. TranSalNet: Towards perceptually relevant visual saliency prediction. Neurocomputing 495, pp. 455-467. doi: 10.1016/j.neucom.2022.04.080
Abstract
Convolutional neural networks (CNNs) have significantly advanced computational modelling for saliency prediction. However, accurately simulating the mechanisms of visual attention in the human cortex remains an academic challenge. It is critical to integrate properties of human vision into the design of CNN architectures, leading to perceptually more relevant saliency prediction. Owing to the inherent inductive biases of CNN architectures, they lack sufficient capacity for long-range contextual encoding. This hinders CNN-based saliency models from capturing properties that emulate the viewing behaviour of humans. Transformers have shown great potential in encoding long-range information by leveraging the self-attention mechanism. In this paper, we propose a novel saliency model that integrates transformer components into CNNs to capture long-range contextual visual information. Experimental results show that the transformers provide added value to saliency prediction, enhancing its perceptual relevance. Our proposed saliency model using transformers achieves superior results on public benchmarks and competitions for saliency prediction models.
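The abstract's central idea, pairing a CNN backbone with transformer self-attention so that local features gain long-range context before being decoded into a saliency map, can be sketched in a few lines of PyTorch. The following is a minimal, hypothetical illustration of that hybrid design, not the published TranSalNet architecture (whose backbone, encoder configuration, and decoder differ); the class name `CNNTransformerSaliency` and all layer sizes are assumptions chosen for demonstration.

```python
import torch
import torch.nn as nn

class CNNTransformerSaliency(nn.Module):
    """Hypothetical CNN + transformer hybrid for saliency prediction.

    Illustrative sketch only, not the authors' TranSalNet: a CNN
    backbone extracts local features, a transformer encoder adds
    long-range context via self-attention over all spatial positions,
    and a small head upsamples to a per-pixel saliency map.
    """

    def __init__(self, embed_dim=64, num_heads=4, num_layers=2):
        super().__init__()
        # CNN backbone: local feature extraction with the usual
        # convolutional inductive biases (locality, translation
        # equivariance); downsamples the input by a factor of 4.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(embed_dim, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Transformer encoder: self-attention lets every spatial token
        # attend to every other, supplying the long-range contextual
        # encoding the abstract says plain CNNs lack. A full model
        # would also add positional embeddings; omitted for brevity.
        layer = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        # Head: project context-enriched features to one saliency channel.
        self.head = nn.Conv2d(embed_dim, 1, kernel_size=1)

    def forward(self, x):
        feats = self.backbone(x)                   # (B, C, H/4, W/4)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/16, C)
        tokens = self.encoder(tokens)              # global self-attention
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        sal = torch.sigmoid(self.head(feats))      # saliency in (0, 1)
        # Restore the input resolution for a dense saliency map.
        return nn.functional.interpolate(
            sal, size=x.shape[-2:], mode="bilinear", align_corners=False
        )

# Usage: predict a saliency map for a batch of RGB images.
model = CNNTransformerSaliency()
images = torch.rand(1, 3, 224, 224)
saliency = model(images)  # shape: (1, 1, 224, 224)
```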
| Item Type: | Article |
| --- | --- |
| Date Type: | Publication |
| Status: | Published |
| Schools: | Computer Science & Informatics |
| Additional Information: | This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/) |
| Publisher: | Elsevier |
| ISSN: | 0925-2312 |
| Date of First Compliant Deposit: | 26 April 2022 |
| Date of Acceptance: | 17 April 2022 |
| Last Modified: | 04 May 2023 10:55 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/149390 |
Citation Data
Cited 1 time in Scopus.