Zhao, Huihuang, Zheng, Jinghua, Wang, Yaonan, Yuan, Xiaofang and Li, Yuhua (ORCID: https://orcid.org/0000-0003-2913-4478) 2020. Portrait style transfer using deep convolutional neural networks and facial segmentation. Computers and Electrical Engineering 85, 106655. 10.1016/j.compeleceng.2020.106655
PDF (Accepted Post-Print Version). Available under License Creative Commons Attribution Non-commercial No Derivatives.
Abstract
When standard neural style transfer approaches are used for portrait style transfer, they often inappropriately apply textures and colours from one region of the style portrait to a different region of the content portrait, leading to unsatisfactory transfer results. This paper presents a portrait style transfer method that transfers the style of one image to another. It first proposes a combined segmentation method for the portrait parts, which automatically segments both the style portrait and the content portrait into masks of seven parts: background, face, eyes, nose, eyebrows, mouth and foreground. These masks capture the style elements of the corresponding regions in the style image and preserve the structure of the content portrait. The paper then proposes an augmented deep Convolutional Neural Network (CNN) framework for portrait style transfer, in which the seven masks are added to a trained deep CNN as feature maps at selected layers of the augmented model. An improved loss function is proposed for training the portrait style transfer. Results on various images show that our method outperforms state-of-the-art style transfer techniques.
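The paper's exact loss and network augmentation are not reproduced on this record page, but as a rough illustration of how segmentation masks can constrain style transfer, the sketch below computes a per-region style loss: for each of the seven masks, Gram matrices of the masked content and style features are matched at every layer. This is a minimal PyTorch sketch under assumed conventions; the function names (`gram_matrix`, `masked_style_loss`), the nearest-neighbour mask downsampling, and the Gram-matching formulation are illustrative assumptions, not the authors' published implementation.

```python
import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (C, H*W) masked feature map; the Gram matrix captures
    # channel-wise correlations, a standard style representation.
    return feat @ feat.t() / feat.shape[1]

def masked_style_loss(content_feats, style_feats, content_masks, style_masks):
    """Hypothetical per-region style loss (an assumption, not the paper's
    exact formulation): for each of the seven segmentation masks
    (background, face, eyes, nose, eyebrows, mouth, foreground), match the
    Gram matrices of the masked content and style features at each layer."""
    loss = 0.0
    for cf, sf in zip(content_feats, style_feats):  # one (C, H, W) map per layer
        c, h, w = cf.shape
        for cm, sm in zip(content_masks, style_masks):
            # Downsample each binary mask to the feature map's resolution.
            cm_r = F.interpolate(cm[None, None], size=(h, w), mode="nearest")[0, 0]
            sm_r = F.interpolate(sm[None, None], size=(sf.shape[1], sf.shape[2]),
                                 mode="nearest")[0, 0]
            if cm_r.sum() < 1 or sm_r.sum() < 1:
                continue  # region absent in one of the two portraits
            gc = gram_matrix((cf * cm_r).reshape(c, -1))
            gs = gram_matrix((sf * sm_r).reshape(sf.shape[0], -1))
            loss = loss + F.mse_loss(gc, gs)
    return loss
```

In a full pipeline, a loss of this kind would typically be combined with a content loss and minimised over the output image (e.g. with L-BFGS), as in standard neural style transfer.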
| Item Type: | Article |
| --- | --- |
| Date Type: | Publication |
| Status: | Published |
| Schools: | Computer Science & Informatics |
| Publisher: | Elsevier |
| ISSN: | 0045-7906 |
| Date of First Compliant Deposit: | 5 January 2021 |
| Date of Acceptance: | 2 April 2020 |
| Last Modified: | 27 Nov 2024 16:00 |
| URI: | https://orca.cardiff.ac.uk/id/eprint/137180 |
Citation Data
Cited 10 times in Scopus.