Leveraging Multi-Color Spaces as a Defense Mechanism Against Model Inversion Attack

Abstract

Privacy is of increasing importance in machine learning in general, and in healthcare in particular, due to the sensitive nature of patient data. Multiple types of security attacks already exist that allow adversaries to extract sensitive information based only on high-level interaction with a trained machine learning model. This paper specifically addresses the model inversion attack, which aims to reconstruct input data from a model's output. We describe a novel approach that uses multi-color spaces as a defense mechanism against this type of attack to strengthen the privacy of open-source models trained on image data. The main idea of our approach is to combine several color spaces to create a more generic representation, reducing the quality of the reconstructions produced by a model inversion attack while maintaining good classification performance. We evaluate the privacy-utility trade-off of our proposed security method on retina images.
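The core idea above, feeding the model a representation built from several color spaces rather than raw RGB, can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the choice of spaces (RGB, HSV, YIQ), the per-pixel conversion via the standard-library `colorsys` module, and the function name are all assumptions made for clarity.

```python
# Hypothetical sketch of a multi-color-space input representation.
# Each pixel is expanded from 3 RGB channels into 9 channels
# (RGB + HSV + YIQ); a classifier would then be trained on this
# richer, more generic representation instead of raw RGB.
import colorsys

def multi_color_space(pixels):
    """pixels: iterable of (r, g, b) floats in [0, 1].
    Returns a list of 9-channel tuples: (r, g, b, h, s, v, y, i, q)."""
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)  # hue/saturation/value
        y, i, q = colorsys.rgb_to_yiq(r, g, b)  # luma/chrominance
        out.append((r, g, b, h, s, v, y, i, q))
    return out

# Example: a single pure-red pixel becomes a 9-channel feature vector.
features = multi_color_space([(1.0, 0.0, 0.0)])
```

In practice the conversions would be applied per image channel (e.g., with a vectorized library) before training, so the downstream classifier, and any inversion attack against it, operates on the combined representation rather than on a single color space.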

Cite

Text

Ouaari et al. "Leveraging Multi-Color Spaces as a Defense Mechanism Against Model Inversion Attack." ICML 2024 Workshops: NextGenAISafety, 2024.

Markdown

[Ouaari et al. "Leveraging Multi-Color Spaces as a Defense Mechanism Against Model Inversion Attack." ICML 2024 Workshops: NextGenAISafety, 2024.](https://mlanthology.org/icmlw/2024/ouaari2024icmlw-leveraging/)

BibTeX

@inproceedings{ouaari2024icmlw-leveraging,
  title     = {{Leveraging Multi-Color Spaces as a Defense Mechanism Against Model Inversion Attack}},
  author    = {Ouaari, Sofiane and Ünal, Ali Burak and Akgün, Mete and Pfeifer, Nico},
  booktitle = {ICML 2024 Workshops: NextGenAISafety},
  year      = {2024},
  url       = {https://mlanthology.org/icmlw/2024/ouaari2024icmlw-leveraging/}
}