Zhang, Yun, Lai, Yukun ORCID: https://orcid.org/0000-0002-2094-5680, Lang, Nie, Zhang, Fang-Lue and Xu, Lin 2024. RecStitchNet: Learning to stitch images with rectangular boundaries. Computational Visual Media 10, pp. 687-703. 10.1007/s41095-024-0420-6
PDF - Published Version. Available under a Creative Commons Attribution License.
Abstract
Irregular boundaries in image stitching naturally occur due to freely moving cameras. To deal with this problem, existing methods focus on optimizing mesh warping to make boundaries regular using traditional explicit solutions. However, such methods depend on hand-crafted features (e.g., keypoints and line segments), so failures often happen in overlapping regions without distinctive features. In this paper, we address this problem by proposing RecStitchNet, an effective network for image stitching with rectangular boundaries. Considering that both stitching and imposing rectangularity are non-trivial tasks in a learning-based framework, we propose a three-step progressive learning strategy, which not only simplifies this task but also gradually achieves a good balance between stitching and imposing rectangularity. In the first step, we perform initial stitching with a pre-trained state-of-the-art image stitching model to produce initially warped stitching results without considering the boundary constraint. Then, we use a regression network with a comprehensive objective regarding mesh, perception, and shape to further encourage the stitched meshes to have rectangular boundaries with high content fidelity. Finally, we propose an unsupervised instance-wise optimization strategy to refine the stitched meshes iteratively, which effectively improves the stitching results in terms of feature alignment as well as boundary and structure preservation. Due to the lack of stitching datasets and the difficulty of label generation, we propose to generate a stitching dataset with rectangular stitched images as pseudo-ground-truth labels, and the performance upper bound induced by it can be surpassed by our unsupervised refinement. Qualitative and quantitative results and evaluations demonstrate the advantages of our method over the state-of-the-art.
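To make the third step more concrete, the following PyTorch sketch illustrates the general idea of unsupervised instance-wise mesh refinement: vertex offsets of an already-stitched mesh are optimized against a boundary-rectangularity term and a shape-preservation term. The function names (rectangular_boundary_loss, shape_preservation_loss, refine_mesh), the mesh parameterization, the loss definitions, and the hyper-parameters are all illustrative assumptions; the paper's actual objective also covers feature alignment and perceptual fidelity, which are omitted here.

```python
# Minimal sketch of unsupervised instance-wise mesh refinement (assumed
# formulation, not the paper's actual losses). Mesh shape: (rows, cols, 2),
# last dimension holds (x, y) vertex coordinates.
import torch


def rectangular_boundary_loss(mesh, height, width):
    """Penalize boundary vertices that deviate from the target rectangle."""
    top = mesh[0, :, 1].abs().mean()               # y of top row should be 0
    bottom = (mesh[-1, :, 1] - height).abs().mean()
    left = mesh[:, 0, 0].abs().mean()              # x of left column should be 0
    right = (mesh[:, -1, 0] - width).abs().mean()
    return top + bottom + left + right


def shape_preservation_loss(mesh, init_mesh):
    """Keep mesh edge vectors close to those of the initial stitched mesh."""
    def edges(m):
        return torch.cat([(m[:, 1:] - m[:, :-1]).reshape(-1, 2),
                          (m[1:, :] - m[:-1, :]).reshape(-1, 2)])
    return (edges(mesh) - edges(init_mesh)).pow(2).mean()


def refine_mesh(init_mesh, height, width, steps=200, lam=0.1):
    """Iteratively optimize vertex offsets of one stitched instance."""
    offset = torch.zeros_like(init_mesh, requires_grad=True)
    opt = torch.optim.Adam([offset], lr=1e-2)
    for _ in range(steps):
        mesh = init_mesh + offset
        loss = (rectangular_boundary_loss(mesh, height, width)
                + lam * shape_preservation_loss(mesh, init_mesh))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (init_mesh + offset).detach()


if __name__ == "__main__":
    # Build a regular grid, perturb it to mimic an irregular stitched boundary,
    # then refine it toward a rectangle while preserving local mesh shape.
    H, W, rows, cols = 480.0, 640.0, 9, 11
    grid_y, grid_x = torch.meshgrid(torch.linspace(0, H, rows),
                                    torch.linspace(0, W, cols),
                                    indexing="ij")
    init_mesh = torch.stack([grid_x, grid_y], dim=-1)
    init_mesh = init_mesh + 8.0 * torch.randn_like(init_mesh)
    refined = refine_mesh(init_mesh, H, W)
    print("mean boundary deviation after refinement:",
          rectangular_boundary_loss(refined, H, W).item())
```

Because this refinement is driven only by losses computed on the instance itself, it requires no ground-truth labels, which is why (per the abstract) it can surpass the upper bound imposed by the pseudo-ground-truth training set.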
| Item Type: | Article |
|---|---|
| Date Type: | Publication |
| Status: | Published |
| Schools: | Computer Science & Informatics |
| Publisher: | SpringerOpen |
| ISSN: | 2096-0433 |
| Date of First Compliant Deposit: | 22 March 2024 |
| Date of Acceptance: | 27 February 2024 |
| Last Modified: | 02 Oct 2024 14:14 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/167469 |