Position Debiasing Fine-Tuning for Causal Perception in Long-Term Dialogue
Cite
Text
Fan et al. "Position Debiasing Fine-Tuning for Causal Perception in Long-Term Dialogue." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/692
Markdown
[Fan et al. "Position Debiasing Fine-Tuning for Causal Perception in Long-Term Dialogue." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/fan2024ijcai-position/) doi:10.24963/ijcai.2024/692
BibTeX
@inproceedings{fan2024ijcai-position,
title = {{Position Debiasing Fine-Tuning for Causal Perception in Long-Term Dialogue}},
author = {Fan, Shixuan and Wei, Wei and Li, Wendi and Mao, Xian-Ling and Xie, Wenfeng and Chen, Dangyang},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {6261--6269},
doi = {10.24963/ijcai.2024/692},
url = {https://mlanthology.org/ijcai/2024/fan2024ijcai-position/}
}