SIMBA: Split Inference - Mechanisms, Benchmarks and Attacks
Abstract
In this work, we tackle the question of how to benchmark the reconstruction of inputs from deep neural network (DNN) representations. This inverse problem is of great importance to the privacy community, where obfuscation of features has been proposed as a technique for privacy-preserving machine learning (ML) inference. In this benchmark, we characterize different obfuscation techniques and design different attack models. We propose multiple reconstruction techniques based on distinct levels of adversary background knowledge. We develop a modular platform that integrates different obfuscation techniques, reconstruction algorithms, and evaluation metrics under a common framework. Using our platform, we benchmark various obfuscation and reconstruction techniques to evaluate their privacy-utility trade-off. Finally, we release a dataset of obfuscated representations to foster research in this area. We have open-sourced the code, dataset, hyper-parameters, and trained models at https://github.com/aidecentralized/InferenceBenchmark.
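To make the split-inference setting concrete, here is a minimal sketch (not the paper's implementation; the two-layer network, `add_noise` obfuscation, and all weights are illustrative assumptions). The client runs the first layers locally and transmits only an intermediate, optionally obfuscated, representation; the server finishes inference from that representation, which is the object a reconstruction attack would target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical split of a tiny network: the client holds the first
# (feature-extraction) layer, the server holds the classifier head.
W_client = rng.standard_normal((8, 4))  # client-side weights
W_server = rng.standard_normal((4, 3))  # server-side weights

def client_forward(x):
    """Client half: produce the representation that leaves the device."""
    return np.maximum(x @ W_client, 0.0)  # linear layer + ReLU

def add_noise(z, scale=0.1):
    """Toy obfuscation: additive Gaussian noise on the representation."""
    return z + scale * rng.standard_normal(z.shape)

def server_forward(z):
    """Server half: finish inference from the (obfuscated) representation."""
    logits = z @ W_server
    return logits.argmax(axis=-1)

x = rng.standard_normal((2, 8))      # raw inputs never leave the client
z = add_noise(client_forward(x))     # only this crosses the network
pred = server_forward(z)
```

The privacy-utility trade-off the benchmark measures lives in `add_noise`: stronger obfuscation makes `z` harder to invert back to `x` but can also degrade the server's predictions.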
Cite
Text
Singh et al. "SIMBA: Split Inference - Mechanisms, Benchmarks and Attacks." Proceedings of the European Conference on Computer Vision (ECCV), 2024. doi:10.1007/978-3-031-73116-7_13
Markdown
[Singh et al. "SIMBA: Split Inference - Mechanisms, Benchmarks and Attacks." Proceedings of the European Conference on Computer Vision (ECCV), 2024.](https://mlanthology.org/eccv/2024/singh2024eccv-simba/) doi:10.1007/978-3-031-73116-7_13
BibTeX
@inproceedings{singh2024eccv-simba,
title = {{SIMBA: Split Inference - Mechanisms, Benchmarks and Attacks}},
author = {Singh, Abhishek and Sharma, Vivek and Sukumaran, Rohan and Mose, John J and Chiu, Jeffrey K and Yu, Justin and Raskar, Ramesh},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2024},
doi = {10.1007/978-3-031-73116-7_13},
url = {https://mlanthology.org/eccv/2024/singh2024eccv-simba/}
}