Zhao, Hui-Huang, Rosin, Paul L. (ORCID: https://orcid.org/0000-0002-4965-3884), Lai, Yu-Kun (ORCID: https://orcid.org/0000-0002-2094-5680) and Wang, Yao-Nan 2020. Automatic semantic style transfer using deep convolutional neural networks and soft masks. The Visual Computer 36, pp. 1307–1324. doi: 10.1007/s00371-019-01726-2
Abstract
This paper presents an automatic image synthesis method for transferring the style of an example image to a content image. When standard neural style transfer approaches are used, the textures and colours in different semantic regions of the style image are often applied inappropriately to the content image, ignoring its semantic layout and ruining the transfer result. To reduce or avoid such effects, we propose a novel method that automatically segments the objects in the style and content images and extracts their soft semantic masks, preserving the structure of the content image while the style is transferred. Each soft mask of the style image represents a specific part of the style image and corresponds to the soft mask of the content image with the same semantics. Both the soft masks and the source images are provided as multichannel input to an augmented deep CNN framework for style transfer that incorporates a generative Markov random field (MRF) model. Results on various images show that our method outperforms the most recent techniques.
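As a rough illustration of the two ingredients the abstract names, the PyTorch sketch below shows how soft semantic masks can be appended as extra channels to CNN feature maps, and how a generative-MRF style loss matches each content patch to its nearest style patch. This is a minimal sketch of the general technique, not the authors' released code; the function names, the mask weight, and the random-tensor demo are all hypothetical.

```python
# Minimal sketch (hypothetical, not the paper's implementation): soft masks
# are concatenated with CNN features so that MRF patch matching tends to
# pair patches drawn from semantically matching regions.
import torch
import torch.nn.functional as F


def concat_masks(features, masks, weight=10.0):
    """Append resized soft masks as extra channels to a feature map.

    features: (1, C, H, W) CNN activations of the content or style image
    masks:    (1, K, Hm, Wm) soft semantic masks, one channel per region
    weight:   hypothetical scale balancing mask vs. feature channels
    """
    resized = F.interpolate(masks, size=features.shape[2:],
                            mode="bilinear", align_corners=False)
    return torch.cat([features, weight * resized], dim=1)


def mrf_style_loss(content_feat, style_feat, patch=3):
    """Generative-MRF style loss: each content patch is matched to its
    nearest style patch (cosine similarity) and pulled towards it."""
    # Extract overlapping patches as rows: (num_patches, C * patch * patch)
    cp = F.unfold(content_feat, patch).squeeze(0).t()
    sp = F.unfold(style_feat, patch).squeeze(0).t()
    # Nearest-neighbour matching under cosine similarity
    cn = F.normalize(cp, dim=1)
    sn = F.normalize(sp, dim=1)
    idx = (cn @ sn.t()).argmax(dim=1)
    # Penalise the distance between each content patch and its match
    return F.mse_loss(cp, sp[idx])


if __name__ == "__main__":
    # Stand-in tensors for CNN features and K=3 soft masks of each image
    feats_c = torch.randn(1, 256, 32, 32)
    feats_s = torch.randn(1, 256, 32, 32)
    masks_c = torch.rand(1, 3, 128, 128)
    masks_s = torch.rand(1, 3, 128, 128)
    loss = mrf_style_loss(concat_masks(feats_c, masks_c),
                          concat_masks(feats_s, masks_s))
    print(loss.item())
```

In a full optimisation loop, a loss of this shape would be evaluated on feature maps from several CNN layers and minimised with respect to the synthesised image, alongside a content loss.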
| Field | Value |
|---|---|
| Item Type | Article |
| Date Type | Publication |
| Status | Published |
| Schools | Computer Science & Informatics |
| Publisher | Springer Verlag |
| ISSN | 0178-2789 |
| Date of First Compliant Deposit | 10 July 2019 |
| Date of Acceptance | 9 July 2019 |
| Last Modified | 25 Nov 2024 07:00 |
| URI | https://orca.cardiff.ac.uk/id/eprint/124145 |
Citation Data
Cited 23 times in Scopus.