Journal of Software:2019.30(11):3549-3566

Fast Multi-way Regional Makeup Transfer Deep Network
HUANG Yan,HE Ze-Wen,ZHANG Wen-Sheng
(Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China;University of Chinese Academy of Sciences, Beijing 100049, China)
Received: August 22, 2018    Revised: January 17, 2019
Abstract (translated from Chinese): Makeup transfer is the task of transferring a reference makeup onto a bare face while preserving the makeup style. It provides a fast and efficient solution for visualizing candidate makeups and has attracted wide attention from both academia and industry. To address the lack of real data showing the same person in different makeups, and the loss of facial structure caused by existing methods that do not fully account for differences in facial features between individuals, this paper proposes a multi-way, region-based fast makeup transfer deep network. Specifically, end-to-end face alignment is first performed on the basis of facial keypoint detection; then, way-specific loss functions optimize the network according to the regional makeup characteristics of eyeshadow, lipstick, and foundation; finally, the makeup result is generated from the multi-way outputs via Poisson blending. The model has a small storage footprint and fast generation speed; while keeping the facial structure unchanged, it yields more balanced eyeshadow, more faithful lipstick color, and finer foundation transfer. Experiments on the widely used VMU and DLMT makeup datasets show that the method achieves more harmonious visual results, faster makeup application, and more diverse transfer styles (different makeups on the same person, and the same makeup on different people), outperforming the compared methods.
Keywords (translated from Chinese): makeup transfer; deep network; regional alignment; multi-way
Abstract: Makeup transfer is the task of transferring a reference makeup to a before-makeup face while maintaining the makeup style. It provides a fast and efficient solution for visualizing candidate makeups and has received extensive academic and industrial attention. However, differences between faces, such as eyebrow distance and lip shape, are largely ignored by recent works, so facial structures are lost during transfer. Moreover, the lack of datasets containing paired before-makeup and after-makeup images poses additional difficulties. To this end, this study proposes a regional fast multi-way makeup transfer network. Specifically, facial key points are first detected and used to align different faces in an end-to-end manner. Then, three way-specific losses, designed around the appearance properties of eyeshadow, lipstick, and foundation, jointly optimize the makeup transfer network and yield better makeup results. Finally, Poisson blending is used to fuse the multi-way outputs. Compared with previous works, the proposed model requires less storage space and runs faster. It preserves the structure of the original face and produces more balanced eyeshadow, more vivid lipstick color, and more detailed foundation makeup. The proposed model is evaluated on two widely used makeup transfer datasets (VMU and DLMT). The experimental results show that the method achieves better visual effects and also outperforms the alternatives in makeup speed and in the diversity of transferring different styles to the same person and the same style to different people.
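The combination of three way-specific regional losses described in the abstract can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's exact formulation: the masked-L2 loss form, the region masks, and the per-region weights are all hypothetical stand-ins for whatever the network actually uses.

```python
import numpy as np

def masked_l2(output, target, mask):
    """Mean squared error restricted to a binary region mask (summed over channels)."""
    diff = (output - target) ** 2 * mask[..., None]
    return diff.sum() / max(mask.sum(), 1)

def multiway_makeup_loss(output, reference, masks, weights):
    """Combine per-region losses for eyeshadow, lipstick, and foundation.

    output, reference: H x W x 3 float images
    masks: dict of H x W binary masks, one per makeup region
    weights: dict of scalar weights per region (assumed hyperparameters)
    """
    return sum(weights[r] * masked_l2(output, reference, masks[r])
               for r in masks)

# Toy 4x4 example: eyeshadow in the top-left corner, lipstick on the
# bottom row, foundation over the whole face.
h, w = 4, 4
out = np.zeros((h, w, 3))   # network output (all zeros here)
ref = np.ones((h, w, 3))    # reference makeup (all ones here)
masks = {
    "eyeshadow": np.zeros((h, w)),
    "lipstick": np.zeros((h, w)),
    "foundation": np.ones((h, w)),
}
masks["eyeshadow"][:2, :2] = 1
masks["lipstick"][3, :] = 1
weights = {"eyeshadow": 1.0, "lipstick": 1.0, "foundation": 0.5}

loss = multiway_makeup_loss(out, ref, masks, weights)
# Each region's masked L2 is 3.0 here, so loss = 1.0*3 + 1.0*3 + 0.5*3 = 7.5
```

In practice each "way" of the network would produce its own regional output before Poisson blending fuses them; here a single output image is reused across regions purely to keep the sketch short.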
Article ID:     CLC Number: TP391    Document Code:
Foundation items:National Natural Science Foundation of China (61472423, 61432008, U1636220)
Reference text:
HUANG Yan, HE Ze-Wen, ZHANG Wen-Sheng. Fast Multi-way Regional Makeup Transfer Deep Network. Journal of Software, 2019, 30(11): 3549-3566