Federated Recommendation with Explicitly Encoding Item Bias
Abstract
With the development of federated learning techniques and the increased need for user privacy protection, federated recommendation has become a new recommendation paradigm. However, most existing works focus on user-level federated recommendation, leaving platform-level federated recommendation largely unexplored. A significant challenge in platform-level federated recommendation scenarios is severe label skew: users behave differently on different platforms, giving rise to rating and item bias problems. In this work, we propose FREIB (Federated Recommendation with Explicitly Encoding Item Bias). The core idea is to explicitly encode item bias during federated learning, addressing the problem of fuzzy item bias and achieving consistent representations in label skew scenarios. We achieve this by utilizing global knowledge guidance to model common rating patterns and by aligning feature prototypes to enhance item encoding at the same rating level. Extensive experiments conducted on three public datasets demonstrate the superiority of our method over several state-of-the-art approaches.
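The prototype-alignment idea mentioned in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the function names, the use of per-rating-level mean embeddings as prototypes, and the squared-distance alignment objective are all assumptions for exposition.

```python
import numpy as np

def rating_prototypes(embeddings, ratings):
    """One prototype (mean embedding) per rating level.

    embeddings: (n_items, d) array of item embeddings on one platform.
    ratings:    (n_items,) array of discrete rating levels.
    Returns a dict mapping rating level -> (d,) prototype vector.
    """
    return {r: embeddings[ratings == r].mean(axis=0)
            for r in np.unique(ratings)}

def prototype_alignment_loss(embeddings, ratings, global_prototypes):
    """Mean squared distance between each local item embedding and the
    (hypothetically server-aggregated) global prototype of its rating
    level; minimizing it pulls same-rating items toward a shared
    representation across platforms."""
    diffs = [embeddings[i] - global_prototypes[r]
             for i, r in enumerate(ratings)]
    return float(np.mean(np.square(diffs)))
```

In a platform-level federated setting, each platform could compute local prototypes, the server could average them into global ones, and this loss could then be added to each platform's local training objective.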
Cite
Text
Wang et al. "Federated Recommendation with Explicitly Encoding Item Bias." AAAI Conference on Artificial Intelligence, 2025. doi:10.1609/AAAI.V39I12.33395
Markdown
[Wang et al. "Federated Recommendation with Explicitly Encoding Item Bias." AAAI Conference on Artificial Intelligence, 2025.](https://mlanthology.org/aaai/2025/wang2025aaai-federated-a/) doi:10.1609/AAAI.V39I12.33395
BibTeX
@inproceedings{wang2025aaai-federated-a,
title = {{Federated Recommendation with Explicitly Encoding Item Bias}},
author = {Wang, Zhihao and Bai, He and Huang, Wenke and Li, Duantengchuan and Wang, Jian and Li, Bing},
booktitle = {AAAI Conference on Artificial Intelligence},
year = {2025},
pages = {12792-12800},
doi = {10.1609/AAAI.V39I12.33395},
url = {https://mlanthology.org/aaai/2025/wang2025aaai-federated-a/}
}