FFAM: Feature Factorization Activation mAP for Explanation of 3D Detectors

Abstract

LiDAR-based 3D object detection has made impressive progress recently, yet most existing models are black boxes that lack interpretability. Previous explanation approaches primarily focus on analyzing image-based models and are not readily applicable to LiDAR-based 3D detectors. In this paper, we propose a feature factorization activation map (FFAM) to generate high-quality visual explanations for 3D detectors. FFAM employs non-negative matrix factorization to generate concept activation maps and subsequently aggregates these maps to obtain a global visual explanation. To achieve object-specific visual explanations, we refine the global visual explanation using the feature gradient of a target object. Additionally, we introduce a voxel upsampling strategy to align the scale between the activation map and the input point cloud. We qualitatively and quantitatively analyze FFAM with multiple detectors on several datasets. Experimental results validate the high-quality visual explanations produced by FFAM. The code is available at \url{https://anonymous.4open.science/r/FFAM-B9AF}.
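The core idea described in the abstract, factorizing a detector's non-negative feature map with NMF into per-concept activation maps and aggregating them into a global saliency map, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature shapes, the random stand-in features, and the max-over-concepts aggregation are assumptions for demonstration.

```python
# Hedged sketch of NMF-based concept activation maps (the idea behind FFAM).
# Assumptions: features are a (C channels x N voxels) non-negative array,
# e.g. post-ReLU backbone activations; shapes here are illustrative only.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
features = np.abs(rng.normal(size=(64, 500)))  # stand-in for real voxel features

k = 4  # number of concepts to factorize into (a hyperparameter)
model = NMF(n_components=k, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(features.T)  # (N, k): per-voxel concept activations
H = model.components_                # (k, C): concept bases in channel space

# Each column of W is one concept activation map over voxels; aggregating
# them (here: max over concepts) gives a single global saliency map,
# which is then min-max normalized to [0, 1] for visualization.
global_map = W.max(axis=1)
denom = global_map.max() - global_map.min() + 1e-8
global_map = (global_map - global_map.min()) / denom
```

In the paper's pipeline this global map would then be refined with the feature gradient of a target object and upsampled back to the point-cloud resolution; neither step is shown here.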

Cite

Text

Liu et al. "FFAM: Feature Factorization Activation mAP for Explanation of 3D Detectors." Neural Information Processing Systems, 2024. doi:10.52202/079017-2555

Markdown

[Liu et al. "FFAM: Feature Factorization Activation mAP for Explanation of 3D Detectors." Neural Information Processing Systems, 2024.](https://mlanthology.org/neurips/2024/liu2024neurips-ffam/) doi:10.52202/079017-2555

BibTeX

@inproceedings{liu2024neurips-ffam,
  title     = {{FFAM: Feature Factorization Activation mAP for Explanation of 3D Detectors}},
  author    = {Liu, Shuai and Li, Boyang and Fang, Zhiyu and Cui, Mingyue and Huang, Kai},
  booktitle = {Neural Information Processing Systems},
  year      = {2024},
  doi       = {10.52202/079017-2555},
  url       = {https://mlanthology.org/neurips/2024/liu2024neurips-ffam/}
}