Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms
Abstract
Learning-based adaptive bitrate (ABR) algorithms have revolutionized video streaming solutions. With the growing demand for data privacy and the rapid development of mobile devices, federated learning (FL) has emerged as a popular training method for neural ABR algorithms in both academia and industry. However, we have discovered that FL-based ABR models are vulnerable to model-poisoning attacks, as local updates remain unseen during global aggregation. In response, we propose MAFL (Malicious ABR model based on Federated Learning) to demonstrate that backdooring a learning-based ABR model via FL is practical. Instead of attacking the global policy, MAFL targets only a single "target client". Moreover, the unique characteristics of deep reinforcement learning (DRL) make the attack even harder to mount. To address these challenges, MAFL is designed with a two-stage attacking mechanism. Using two representative attack cases with real-world traces, we show that MAFL significantly degrades model performance on the target client (increasing the rebuffering penalty by 2x and 5x in the two cases, respectively) with a minimal negative impact on benign clients.
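The weakness the abstract identifies is that the aggregation server never inspects individual local updates before averaging them. The sketch below illustrates that weakness in a generic FedAvg-style setting; it is a minimal toy example of the well-known model-replacement (boosting) attack, not the paper's actual two-stage MAFL mechanism, and all dimensions, client counts, and names are illustrative assumptions.

import numpy as np

def fedavg(client_updates, client_weights):
    """Weighted average of client weight vectors (plain FedAvg)."""
    total = sum(client_weights)
    agg = np.zeros_like(client_updates[0])
    for update, w in zip(client_updates, client_weights):
        agg += (w / total) * update
    return agg

rng = np.random.default_rng(0)
global_w = rng.normal(size=8)          # current global model (toy 8-dim weights)

# Nine benign clients submit honest, near-identical updates.
benign = [global_w + 0.01 * rng.normal(size=8) for _ in range(9)]

# One attacker wants the aggregated model to become `poison_target`.
# Because the server only sees the submitted vector, the attacker can
# "boost" it by the number of clients so that averaging cancels the
# benign contributions (classic model-replacement attack; illustrative
# of the unseen-update weakness, not the MAFL algorithm itself).
n_clients = 10
poison_target = rng.normal(size=8)
malicious = n_clients * (poison_target - global_w) + global_w

new_global = fedavg(benign + [malicious], [1.0] * n_clients)
print(np.linalg.norm(new_global - poison_target))  # ~0: attacker dominates

Averaging the nine near-global benign updates with the single boosted update drives the new global model almost exactly to the attacker's target, which is why unseen local updates make FL-trained ABR policies an attractive poisoning surface.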
Cite
Text
Zhang and Huang. "Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms." AAAI Conference on Artificial Intelligence, 2024. doi:10.1609/AAAI.V38I1.27796
Markdown
[Zhang and Huang. "Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms." AAAI Conference on Artificial Intelligence, 2024.](https://mlanthology.org/aaai/2024/zhang2024aaai-adversarial/) doi:10.1609/AAAI.V38I1.27796
BibTeX
@inproceedings{zhang2024aaai-adversarial,
  title     = {{Adversarial Attacks on Federated-Learned Adaptive Bitrate Algorithms}},
  author    = {Zhang, Rui-Xiao and Huang, Tianchi},
  booktitle = {AAAI Conference on Artificial Intelligence},
  year      = {2024},
  pages     = {419--427},
  doi       = {10.1609/AAAI.V38I1.27796},
  url       = {https://mlanthology.org/aaai/2024/zhang2024aaai-adversarial/}
}