Causal Intervention for Subject-Deconfounded Facial Action Unit Recognition

Abstract

Subject-invariant facial action unit (AU) recognition remains challenging because the data distribution varies across subjects. In this paper, we propose a causal inference framework for subject-invariant facial action unit recognition. To illustrate the causal effects in the AU recognition task, we formulate the causalities among facial images, subjects, latent AU semantic relations, and estimated AU occurrence probabilities via a structural causal model. By constructing such a causal diagram, we clarify the causal effects among variables and propose a plug-in causal intervention module, CIS, to deconfound the confounder Subject in the causal diagram. Extensive experiments conducted on two commonly used AU benchmark datasets, BP4D and DISFA, show the effectiveness of our CIS, and CISNet, the model with CIS inserted, achieves state-of-the-art performance.
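Deconfounding a confounder in a structural causal model is conventionally done via backdoor adjustment; a minimal sketch of the general form, using illustrative symbols (not necessarily the paper's exact notation), where $X$ is the facial image, $Y$ the AU occurrence, and $S$ the confounding subject:

```latex
% Backdoor adjustment: intervene on X by stratifying over the
% confounder S (Subject) and averaging with its prior.
P\bigl(Y \mid \mathrm{do}(X)\bigr) \;=\; \sum_{s} P\bigl(Y \mid X, S = s\bigr)\, P(S = s)
```

Intuitively, instead of letting the observed subject distribution bias $P(Y \mid X)$, the estimate is averaged over subjects weighted by their prior, cutting the backdoor path from $S$ to $X$.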

Cite

Text

Chen et al. "Causal Intervention for Subject-Deconfounded Facial Action Unit Recognition." AAAI Conference on Artificial Intelligence, 2022. doi:10.1609/AAAI.V36I1.19914

Markdown

[Chen et al. "Causal Intervention for Subject-Deconfounded Facial Action Unit Recognition." AAAI Conference on Artificial Intelligence, 2022.](https://mlanthology.org/aaai/2022/chen2022aaai-causal/) doi:10.1609/AAAI.V36I1.19914

BibTeX

@inproceedings{chen2022aaai-causal,
  title     = {{Causal Intervention for Subject-Deconfounded Facial Action Unit Recognition}},
  author    = {Chen, Yingjie and Chen, Diqi and Wang, Tao and Wang, Yizhou and Liang, Yun},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2022},
  pages     = {374--382},
  doi       = {10.1609/AAAI.V36I1.19914},
  url       = {https://mlanthology.org/aaai/2022/chen2022aaai-causal/}
}