Semantic Relationships Guided Facial Action Unit Analysis
CLC Number: TP391
    Abstract:

    The main purpose of facial action unit (AU) analysis is to identify the state of each facial action unit, which can be applied to many scenarios such as lie detection, autonomous driving, and intelligent healthcare. In recent years, with the popularization of deep learning in computer vision, facial action unit analysis has attracted extensive attention. Facial action unit analysis can be divided into two tasks: facial action unit recognition and facial action unit intensity estimation. However, existing studies usually address only one of these problems. More importantly, these methods usually focus only on designing or learning complex feature representations while ignoring the semantic correlations between facial action units. In fact, facial action units often have strong interrelationships, and effectively using this semantic knowledge for learning and reasoning is the key to facial action unit analysis. This study models the semantic relationships of facial action units by analyzing the co-occurrence and mutual exclusion of AUs in various facial behaviors, organizes the AUs in the form of a structured knowledge graph, and then proposes an AU semantic relationship embedded representation learning (SRERL) framework. Experiments are conducted on three benchmarks, BP4D, DISFA, and FERA2015, covering both facial action unit analysis tasks. The experimental results show that the proposed method outperforms previous work and achieves state-of-the-art performance. Furthermore, experiments on the BP4D+ dataset and an occlusion evaluation on the BP4D dataset demonstrate the outstanding generalization and robustness of the proposed method.
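    To make the knowledge-graph idea concrete, the sketch below encodes a few AU relationships as a signed adjacency matrix (positive for co-occurrence, negative for mutual exclusion) and applies one graph-propagation step that refines each AU's feature vector using its related AUs. The specific AU pairs, weights, and the update rule are illustrative placeholders, not the statistics or the exact architecture used in the SRERL paper.

    ```python
    import numpy as np

    # Hypothetical AU relationship graph: positive weight = frequent
    # co-occurrence, negative weight = mutual exclusion. These values are
    # placeholders for illustration, not the paper's learned statistics.
    AUS = [1, 2, 4, 6, 12, 15, 17]               # AU labels in this sketch
    IDX = {au: i for i, au in enumerate(AUS)}

    n = len(AUS)
    A = np.zeros((n, n))

    def relate(au_a, au_b, w):
        """Symmetrically record a relationship weight between two AUs."""
        i, j = IDX[au_a], IDX[au_b]
        A[i, j] = A[j, i] = w

    relate(1, 2, 0.8)     # inner/outer brow raiser often co-occur
    relate(6, 12, 0.9)    # cheek raiser + lip-corner puller (smiling)
    relate(12, 15, -0.7)  # lip-corner puller vs. depressor: mutually exclusive
    relate(1, 4, 0.3)     # brow raiser and brow lowerer weakly related

    def propagate(features, adj):
        """One graph-propagation step: mix each AU's features with its
        neighbors', normalized by total relation strength plus a self-loop."""
        deg = np.abs(adj).sum(axis=1, keepdims=True) + 1.0
        return (features + adj @ features) / deg

    features = np.random.default_rng(0).normal(size=(n, 16))  # per-AU features
    refined = propagate(features, A)
    print(refined.shape)  # (7, 16)
    ```

    In the full framework such propagated, relation-aware representations would feed per-AU classifiers (for recognition) or regressors (for intensity estimation), so that an ambiguous AU can borrow evidence from strongly correlated ones.
    
    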


Li Guanbin, Zhang Ruifei, Zhu Xin, Lin Liang. Semantic relationships guided facial action unit analysis. Journal of Software, 2023, 34(6): 2922-2941

History
  • Received: March 26, 2021
  • Revised: June 07, 2021
  • Online: November 30, 2022
Copyright: Institute of Software, Chinese Academy of Sciences Beijing ICP No. 05046678-4