Jiang, Zhongyu, Yue, Huanjing, Lai, Yu-Kun (ORCID: https://orcid.org/0000-0002-2094-5680), Yang, Jingyu, Hou, Yonghong and Hou, Chunping 2021. Deep edge map guided depth super resolution. Signal Processing: Image Communication 90, 116040. https://doi.org/10.1016/j.image.2020.116040
PDF (Accepted Post-Print Version, 16MB). Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
Accurate edge reconstruction is critical for depth map super resolution (SR), so many traditional SR methods use edge maps to guide depth SR. However, it is difficult to predict accurate edge maps from low resolution (LR) depth maps. In this paper, we propose a deep edge map guided depth SR method, which consists of an edge prediction subnetwork and an SR subnetwork. The edge prediction subnetwork exploits the hierarchical representation of color and depth images to produce accurate edge maps, which improve the performance of the SR subnetwork. The SR subnetwork is a disentangled cascaded network that progressively upsamples the SR result, where each level is made up of a weight sharing module and an adaptive module. The weight sharing module extracts features that are general across levels, while the adaptive module transforms these general features into level-specific features to adapt to differently degraded inputs. Quantitative and qualitative evaluations on various datasets with different magnification factors demonstrate the effectiveness and promising performance of the proposed method. In addition, we construct a benchmark dataset captured by Kinect-v2 to facilitate research on real-world depth map SR.
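The abstract describes a two-subnetwork design: an edge prediction subnetwork fusing color and depth cues, and a cascaded SR subnetwork whose levels pair a weight-sharing module with a level-specific adaptive module. Below is a minimal PyTorch sketch of that layout, not the authors' implementation: all class names (`EdgePredictionNet`, `SRLevel`, `EdgeGuidedDepthSR`), layer sizes, and the way the edge map is injected as guidance are hypothetical assumptions for illustration.

```python
# Minimal sketch of the two-subnetwork layout from the abstract.
# All architectural details here are assumptions, not the paper's design.
import math
import torch
import torch.nn as nn

class EdgePredictionNet(nn.Module):
    """Hypothetical edge subnetwork: fuses color and upsampled-depth features
    into an edge probability map (the paper's hierarchical design is not reproduced)."""
    def __init__(self, ch=64):
        super().__init__()
        self.color_branch = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.depth_branch = nn.Sequential(nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, color_hr, depth_up):
        # Both inputs are at the target (HR) resolution.
        feats = torch.cat([self.color_branch(color_hr), self.depth_branch(depth_up)], dim=1)
        return self.fuse(feats)

class SRLevel(nn.Module):
    """One 2x level: a weight-sharing module reused across levels plus a
    level-specific adaptive module, followed by pixel-shuffle upsampling."""
    def __init__(self, shared, ch=64):
        super().__init__()
        self.shared = shared  # same instance passed to every level -> shared weights
        self.adapt = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.up = nn.Sequential(nn.Conv2d(ch, 4 * ch, 3, padding=1), nn.PixelShuffle(2))

    def forward(self, x):
        return self.up(self.adapt(self.shared(x)))

class EdgeGuidedDepthSR(nn.Module):
    """Cascade of SR levels guided by a predicted edge map."""
    def __init__(self, scale=4, ch=64):
        super().__init__()
        self.head = nn.Conv2d(2, ch, 3, padding=1)  # LR depth + LR edge guidance
        shared = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.levels = nn.ModuleList([SRLevel(shared, ch) for _ in range(int(math.log2(scale)))])
        self.tail = nn.Conv2d(ch, 1, 3, padding=1)

    def forward(self, depth_lr, edge_lr):
        x = self.head(torch.cat([depth_lr, edge_lr], dim=1))
        for level in self.levels:  # progressive 2x upsampling per level
            x = level(x)
        return self.tail(x)

# Example: 4x SR of a 64x64 LR depth map, with edge guidance downsampled to LR.
edge_net = EdgeGuidedDepthSR(scale=4), EdgePredictionNet()
model, edge_pred = edge_net
edge_hr = edge_pred(torch.rand(1, 3, 256, 256), torch.rand(1, 1, 256, 256))
edge_lr = nn.functional.interpolate(edge_hr, size=(64, 64), mode="bilinear", align_corners=False)
print(model(torch.rand(1, 1, 64, 64), edge_lr).shape)  # torch.Size([1, 1, 256, 256])
```

The one point mirrored directly from the abstract is the disentangling: `shared` is a single module instance reused by every level (general features), while each level owns its own `adapt` block (level-specific features).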
| Item Type: | Article |
|---|---|
| Date Type: | Publication |
| Status: | Published |
| Schools: | Computer Science & Informatics |
| Publisher: | Elsevier |
| ISSN: | 0923-5965 |
| Date of First Compliant Deposit: | 4 November 2020 |
| Date of Acceptance: | 13 October 2020 |
| Last Modified: | 29 November 2024 09:45 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/136130 |
Citation Data
Cited 7 times in Scopus.