Random Erasing vs. Model Inversion: A Promising Defense or a False Hope?

Abstract

Model Inversion (MI) attacks pose a significant privacy threat by reconstructing private training data from machine learning models. While existing defenses primarily concentrate on model-centric approaches, the impact of data on MI robustness remains largely unexplored. In this work, we explore Random Erasing (RE), a technique traditionally used to improve model generalization under occlusion, and uncover its surprising effectiveness as a defense against MI attacks. Specifically, our novel feature-space analysis shows that models trained with RE images introduce a significant discrepancy between the features of MI-reconstructed images and those of the private data. At the same time, the features of private images remain distinct from those of other classes and well separated from different classification regions. These effects collectively degrade MI reconstruction quality and attack accuracy while maintaining reasonable natural accuracy. Furthermore, we examine two critical properties of RE: Partial Erasure and Random Location. First, Partial Erasure prevents the model from observing entire objects during training; we find this has a significant impact on MI, which aims to reconstruct entire objects. Second, the Random Location of erasure plays a crucial role in achieving a strong privacy-utility trade-off. Our findings highlight RE as a simple yet effective defense mechanism that can be easily integrated with existing privacy-preserving techniques. Extensive experiments across 37 setups demonstrate that our method achieves state-of-the-art (SOTA) performance in the privacy-utility trade-off. The results consistently demonstrate the superiority of our defense over existing defenses across different MI attacks, network architectures, and attack configurations. For the first time, we achieve a significant degradation in attack accuracy without a decrease in utility for some configurations. Our code and additional results are available at: https://ngoc-nguyen-0.github.io/MIDRE/
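The defense builds on standard Random Erasing augmentation. As context, a minimal NumPy sketch of RE, illustrating the two properties the abstract highlights (partial erasure of a rectangle, placed at a random location), might look like the following. The function name and parameter defaults are illustrative and are not taken from the authors' implementation.

```python
import numpy as np

def random_erasing(img, p=0.5, scale=(0.02, 0.33), ratio=(0.3, 3.3), rng=None):
    """Erase one randomly placed rectangle of an HxW[xC] image with noise.

    Illustrative sketch only: with probability p, a rectangle covering a
    random fraction of the image area (Partial Erasure) is placed at a
    random position (Random Location) and filled with uniform noise.
    """
    rng = np.random.default_rng() if rng is None else rng
    if rng.random() > p:          # apply the erasure with probability p
        return img
    out = img.copy()
    h, w = img.shape[:2]
    area = h * w
    for _ in range(10):           # retry until a valid rectangle fits
        target = rng.uniform(*scale) * area               # erased area
        ar = np.exp(rng.uniform(np.log(ratio[0]),
                                np.log(ratio[1])))        # aspect ratio
        eh = int(round(np.sqrt(target * ar)))
        ew = int(round(np.sqrt(target / ar)))
        if 0 < eh < h and 0 < ew < w:
            top = rng.integers(0, h - eh + 1)             # random location
            left = rng.integers(0, w - ew + 1)
            out[top:top + eh, left:left + ew] = rng.random(
                (eh, ew) + img.shape[2:])                 # noise fill
            return out
    return out
```

In training, such a transform would be applied per image per epoch, so the model sees differently occluded views of each private image rather than the full object.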

Cite

Text

Tran et al. "Random Erasing vs. Model Inversion: A Promising Defense or a False Hope?" Transactions on Machine Learning Research, 2025.

Markdown

[Tran et al. "Random Erasing vs. Model Inversion: A Promising Defense or a False Hope?" Transactions on Machine Learning Research, 2025.](https://mlanthology.org/tmlr/2025/tran2025tmlr-random/)

BibTeX

@article{tran2025tmlr-random,
  title     = {{Random Erasing vs. Model Inversion: A Promising Defense or a False Hope?}},
  author    = {Tran, Viet-Hung and Nguyen, Ngoc-Bao and Mai, Son T. and Vandierendonck, Hans and Assent, Ira and Kot, Alex and Cheung, Ngai-Man},
  journal   = {Transactions on Machine Learning Research},
  year      = {2025},
  url       = {https://mlanthology.org/tmlr/2025/tran2025tmlr-random/}
}