HFIA: A Parasitic Feature Inference Attack and Gradient-Based Defense Strategy in SplitNN-Based Vertical Federated Learning
Abstract
Vertical Federated Learning (VFL) is widely adopted in industries such as healthcare, enabling collaborators to enhance model performance using disparate data sources. Split Neural Networks (SplitNN) are central to two-party VFL setups, providing enhanced data privacy during collaboration. However, an untrustworthy server owner, referred to as the host, may exploit its position to infer sensitive client-side features during training. Our research introduces the Hitchhike Feature Inference Attack (HFIA), in which the host leverages a minimal auxiliary dataset (less than 1% of the total data) to infer sensitive features with high accuracy (up to 99%) before VFL training completes. To mitigate this privacy risk, we propose a client-side defense strategy: clients construct shadow models to simulate the attacker's approach and inject gradient-based adversarial noise into their local embeddings, significantly reducing feature leakage. Experiments demonstrate that HFIA achieves high attack success rates, while the defense reduces the attack's macro-AUC to approximately 60% (a reduction of over 20%) with minimal impact (<5% decrease) on the normal VFL task, and it imposes no restrictions on VFL model construction. In practice, participants can adopt this approach to effectively mitigate training-time privacy leakage and protect sensitive client-side data from malicious inference.
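To make the defense idea concrete, the sketch below illustrates one plausible reading of the client-side mechanism: the client trains a shadow model that mimics the host's feature-inference attack, then perturbs its local embedding along the gradient that increases the shadow attacker's loss before sending it to the host. This is a minimal PyTorch sketch; the architectures, the noise scale `epsilon`, and the FGSM-style update are illustrative assumptions, not the paper's exact algorithm.

```python
import torch
import torch.nn as nn

# Hypothetical components; the paper's exact architectures are not given here.
client_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
shadow_attacker = nn.Linear(8, 32)  # simulates the host's feature-inference model

def perturb_embedding(x, epsilon=0.1):
    """Add FGSM-style adversarial noise to the client embedding (illustrative).

    The shadow attacker tries to reconstruct the raw features x from the
    embedding; we nudge the embedding in the direction that most increases
    the attacker's reconstruction loss before it leaves the client.
    """
    emb = client_model(x)
    emb = emb.detach().requires_grad_(True)
    recon = shadow_attacker(emb)
    attack_loss = nn.functional.mse_loss(recon, x)  # attacker wants this small
    attack_loss.backward()
    # Step along the gradient sign, i.e. the direction that hurts the attacker.
    noisy_emb = emb + epsilon * emb.grad.sign()
    return noisy_emb.detach()

x = torch.randn(4, 32)            # a batch of raw client features
emb_to_send = perturb_embedding(x)  # embedding forwarded to the host
```

In this reading, `epsilon` trades off privacy against utility: larger noise degrades the attacker's macro-AUC further but also perturbs the signal the host needs for the main VFL task, which is consistent with the abstract's reported <5% utility cost.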
Cite
Text
Dong et al. "HFIA: A Parasitic Feature Inference Attack and Gradient-Based Defense Strategy in SplitNN-Based Vertical Federated Learning." Machine Learning, 2025. doi:10.1007/s10994-025-06804-2

Markdown
[Dong et al. "HFIA: A Parasitic Feature Inference Attack and Gradient-Based Defense Strategy in SplitNN-Based Vertical Federated Learning." Machine Learning, 2025.](https://mlanthology.org/mlj/2025/dong2025mlj-hfia/) doi:10.1007/s10994-025-06804-2

BibTeX
@article{dong2025mlj-hfia,
title = {{HFIA: A Parasitic Feature Inference Attack and Gradient-Based Defense Strategy in SplitNN-Based Vertical Federated Learning}},
author = {Dong, Qixuan and Zhou, Boyang and Ru, ZhiQiang and He, Ying and Hua, Jingyu and Zhong, Sheng},
journal = {Machine Learning},
year = {2025},
pages = {170},
  doi = {10.1007/s10994-025-06804-2},
volume = {114},
url = {https://mlanthology.org/mlj/2025/dong2025mlj-hfia/}
}