FairViT: Fair Vision Transformer via Adaptive Masking

Abstract

Vision Transformer (ViT) has achieved excellent performance and demonstrated promising potential in various computer vision tasks. The wide deployment of ViT in real-world tasks requires a thorough understanding of the model's societal impact. However, most ViT-based works do not take fairness into account, and it is unclear whether directly applying CNN-oriented debiasing algorithms to ViT is feasible. Moreover, previous works typically sacrifice accuracy for fairness. Therefore, we aim to develop an algorithm that improves accuracy without sacrificing fairness. In this paper, we propose FairViT, a novel accurate and fair ViT framework. To this end, we introduce a novel distance loss and deploy adaptive fairness-aware masks on the attention layers that are updated along with the model parameters. Experimental results show that FairViT achieves better accuracy than other alternatives while maintaining competitive computational efficiency. Furthermore, FairViT achieves appreciable fairness results.
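The key mechanism described in the abstract, a learnable fairness-aware mask applied inside the attention layers and trained jointly with the model parameters, can be illustrated with a minimal sketch. The code below is an assumption-based illustration, not the authors' implementation: the module name MaskedSelfAttention, the additive placement of the mask on the attention logits, and the zero initialization are all hypothetical choices; the paper's distance loss is not shown.

```python
# Minimal sketch (assumed details, NOT the authors' code): a self-attention
# layer whose attention scores are modulated by a learnable mask that is
# optimized jointly with the rest of the model.
import torch
import torch.nn as nn


class MaskedSelfAttention(nn.Module):  # hypothetical module name
    def __init__(self, dim: int, num_heads: int, num_tokens: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Learnable fairness-aware mask over attention logits, one per head.
        # Zero initialization makes the layer start as standard attention.
        self.fair_mask = nn.Parameter(torch.zeros(num_heads, num_tokens, num_tokens))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)
        scores = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        # Additive mask on the attention logits; it receives gradients from
        # the overall training objective and so updates with the model.
        scores = scores + self.fair_mask[:, :N, :N]
        attn = scores.softmax(dim=-1)
        return self.proj((attn @ v).transpose(1, 2).reshape(B, N, C))
```

In such a setup, training would minimize the task loss plus a fairness term (the paper's distance loss), so the mask parameters receive gradients like any other weights; see the paper for the actual objective and mask design.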

Cite

Text

Tian et al. "FairViT: Fair Vision Transformer via Adaptive Masking." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73650-6_26

Markdown

[Tian et al. "FairViT: Fair Vision Transformer via Adaptive Masking." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/tian2024eccv-fairvit/) doi:10.1007/978-3-031-73650-6_26

BibTeX

@inproceedings{tian2024eccv-fairvit,
  title     = {{FairViT: Fair Vision Transformer via Adaptive Masking}},
  author    = {Tian, Bowei and Du, Ruijie and Shen, Yanning},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2024},
  doi       = {10.1007/978-3-031-73650-6_26},
  url       = {https://mlanthology.org/eccv/2024/tian2024eccv-fairvit/}
}