Song, Ran, Liu, Yonghuai and Rosin, Paul (ORCID: https://orcid.org/0000-0002-4965-3884) 2021. Mesh saliency via weakly supervised classification-for-saliency CNN. IEEE Transactions on Visualization and Computer Graphics 27 (1), pp. 151-164. DOI: 10.1109/TVCG.2019.2928794
Abstract
Recently, effort has been made to apply deep learning to the detection of mesh saliency. However, one major barrier is collecting a large amount of vertex-level annotation as saliency ground truth for training the neural networks. Several pilot studies have shown that this task is difficult. In this work, we address this problem by developing a novel network trained in a weakly supervised manner. The training is end-to-end and does not require any saliency ground truth, only the class membership of meshes. Our Classification-for-Saliency CNN (CfS-CNN) employs a multi-view setup and contains a newly designed two-channel structure which integrates view-based features of both classification and saliency. It essentially transfers knowledge from 3D object classification to mesh saliency. Extensive experimental results show that our approach significantly outperforms existing state-of-the-art methods. In addition, the CfS-CNN can be used directly for scene saliency, and we showcase two novel applications based on scene saliency to demonstrate its utility.
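To make the abstract's description more concrete, the following is a minimal, hypothetical sketch (not the authors' released code) of a two-channel, multi-view network trained only with class labels, in the spirit of the CfS-CNN described above. The backbone, layer sizes, view count, and the use of the saliency channel as a softmax attention over per-view features are all assumptions for illustration.

```python
# Hypothetical sketch of weakly supervised, two-channel multi-view training:
# only the classification loss supervises the network, while the saliency
# channel learns view importance implicitly. Architecture details are assumed.
import torch
import torch.nn as nn

class TwoChannelMultiViewNet(nn.Module):
    def __init__(self, num_classes: int, num_views: int = 12):
        super().__init__()
        self.num_views = num_views
        # Shared per-view feature extractor (stand-in for a CNN backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Saliency channel: scores each rendered view.
        self.saliency_head = nn.Linear(64, 1)
        # Classification channel: predicts the object class from pooled features.
        self.cls_head = nn.Linear(64, num_classes)

    def forward(self, views: torch.Tensor):
        # views: (batch, num_views, 3, H, W)
        b, v, c, h, w = views.shape
        feats = self.backbone(views.view(b * v, c, h, w)).view(b, v, -1)
        view_scores = self.saliency_head(feats)        # (b, v, 1)
        weights = torch.softmax(view_scores, dim=1)    # per-view importance
        pooled = (weights * feats).sum(dim=1)          # saliency-weighted pooling
        logits = self.cls_head(pooled)
        return logits, view_scores.squeeze(-1)

# Weak supervision: the loss uses only class labels; no saliency ground truth.
model = TwoChannelMultiViewNet(num_classes=40)
views = torch.randn(2, 12, 3, 64, 64)
labels = torch.randint(0, 40, (2,))
logits, view_saliency = model(views)
loss = nn.functional.cross_entropy(logits, labels)
loss.backward()
```

The design point illustrated is that the saliency channel receives no direct supervision: it influences the classification output through the attention pooling, so gradients from the class label alone shape the view-level saliency scores.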
| Item Type: | Article |
|---|---|
| Date Type: | Publication |
| Status: | Published |
| Schools: | Schools > Computer Science & Informatics |
| Publisher: | Institute of Electrical and Electronics Engineers (IEEE) |
| ISSN: | 1077-2626 |
| Date of First Compliant Deposit: | 15 August 2019 |
| Date of Acceptance: | 10 July 2019 |
| Last Modified: | 18 Nov 2024 07:00 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/124144 |
Citation Data
Cited 6 times in Scopus.