Liu, Fang, Zou, Changqing, Deng, Xiaoming, Zuo, Ran and Lai, Yu-Kun |
PDF (Accepted Post-Print Version) - Download (19MB) |
Abstract
Sketch-based image retrieval (SBIR) has been a popular research topic in recent years. Existing works concentrate on mapping the visual information of sketches and images to a semantic space at the object level. In this paper, for the first time, we study the fine-grained scene-level SBIR problem, which aims at retrieving scene images satisfying the user’s specific requirements via a freehand scene sketch. We propose a graph embedding based method to learn the similarity measurement between images and scene sketches, which effectively models the multi-modal information, including the size and appearance of objects as well as their layout information. To evaluate our approach, we collect a dataset based on SketchyCOCO and extend it using COCO-Stuff. Comprehensive experiments demonstrate the significant potential of the proposed approach for fine-grained scene-level image retrieval.
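The following is a minimal, illustrative sketch of how a graph-embedding similarity between a scene sketch and a scene image could be set up along the lines described in the abstract (per-object appearance, size, and layout aggregated into a scene embedding). It is not the authors' implementation; all class names, dimensions, the single message-passing step, and the shared encoder for both modalities are assumptions made for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SceneGraphEncoder(nn.Module):
    """Hypothetical encoder: per-object features + layout -> one scene embedding."""

    def __init__(self, appearance_dim=256, hidden_dim=128, embed_dim=128):
        super().__init__()
        # Node input = appearance feature + normalized box (x, y, w, h) + area
        self.node_proj = nn.Linear(appearance_dim + 5, hidden_dim)
        # One round of message passing over a dense layout graph
        self.msg = nn.Linear(hidden_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, embed_dim)

    def forward(self, appearance, boxes):
        # appearance: (N, appearance_dim) per-object appearance features
        # boxes:      (N, 4) normalized (x, y, w, h); the area term encodes object size
        area = (boxes[:, 2] * boxes[:, 3]).unsqueeze(1)
        nodes = torch.relu(self.node_proj(torch.cat([appearance, boxes, area], dim=1)))
        # Edge weights from pairwise layout affinity (closer object centers -> larger weight)
        centers = boxes[:, :2] + boxes[:, 2:] / 2.0
        adj = torch.softmax(-torch.cdist(centers, centers), dim=1)
        nodes = torch.relu(nodes + adj @ self.msg(nodes))        # message passing step
        return F.normalize(self.out(nodes.mean(dim=0)), dim=0)   # scene embedding


# Usage sketch: embed both modalities and score candidates by cosine similarity.
encoder = SceneGraphEncoder()
sketch_emb = encoder(torch.randn(3, 256), torch.rand(3, 4))
image_emb = encoder(torch.randn(5, 256), torch.rand(5, 4))
score = torch.dot(sketch_emb, image_emb)  # higher = better match
```

In practice, separate encoders per modality and a learned ranking loss over many candidate images would be expected; the shared encoder and random inputs above are only for brevity.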
Item Type: | Conference or Workshop Item (Paper) |
---|---|
Date Type: | Publication |
Status: | Published |
Schools: | Schools > Computer Science & Informatics |
Publisher: | Springer |
ISBN: | 9783030585280 |
Funders: | The Royal Society |
Date of First Compliant Deposit: | 17 July 2020 |
Date of Acceptance: | 2 July 2020 |
Last Modified: | 24 Sep 2025 10:30 |
URI: | https://orca.cardiff.ac.uk/id/eprint/133561 |
Citation Data
Cited 13 times in Scopus.