Safe-EF: Error Feedback for Non-Smooth Constrained Optimization
Abstract
Federated learning faces severe communication bottlenecks due to the high dimensionality of model updates. Communication compression with contractive compressors (e.g., Top-$K$) is often preferable in practice but can degrade performance without proper handling. Error feedback (EF) mitigates such issues but has been largely restricted to smooth, unconstrained problems, limiting its real-world applicability where non-smooth objectives and safety constraints are critical. We advance our understanding of EF in the canonical non-smooth convex setting by establishing new lower complexity bounds for first-order algorithms with contractive compression. Next, we propose Safe-EF, a novel algorithm that matches our lower bound (up to a constant) while enforcing safety constraints essential for practical applications. Extending our approach to the stochastic setting, we bridge the gap between theory and practical implementation. Extensive experiments in a reinforcement learning setup, simulating distributed humanoid robot training, validate the effectiveness of Safe-EF in ensuring safety and reducing communication complexity.
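To make the mechanism referenced in the abstract concrete, below is a minimal, generic sketch of error feedback paired with a Top-$K$ contractive compressor. It is an illustration only, not the paper's Safe-EF algorithm; the helper names `top_k` and `ef_step` and the NumPy-based formulation are assumptions introduced here.

```python
# Generic sketch of error feedback (EF) with a Top-K contractive compressor.
# Illustrative only; this is NOT the paper's Safe-EF algorithm, and the
# function names below are hypothetical.
import numpy as np

def top_k(x: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of x and zero out the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def ef_step(grad: np.ndarray, error: np.ndarray, lr: float, k: int):
    """One error-feedback step: compress the error-corrected update,
    transmit the compressed message, and carry the residual forward."""
    corrected = lr * grad + error      # add the locally accumulated compression error
    message = top_k(corrected, k)      # contractive compression of the update
    new_error = corrected - message    # residual kept locally for the next round
    return message, new_error
```

In this generic template, whatever part of the update is not transmitted in one round is carried into the next round's update; compensating for the bias of contractive compressors in this way is the core idea behind error feedback.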
Cite
Text
Islamov et al. "Safe-EF: Error Feedback for Non-Smooth Constrained Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.
Markdown
[Islamov et al. "Safe-EF: Error Feedback for Non-Smooth Constrained Optimization." Proceedings of the 42nd International Conference on Machine Learning, 2025.](https://mlanthology.org/icml/2025/islamov2025icml-safeef/)
BibTeX
@inproceedings{islamov2025icml-safeef,
title = {{Safe-EF: Error Feedback for Non-Smooth Constrained Optimization}},
author = {Islamov, Rustem and As, Yarden and Fatkhullin, Ilyas},
booktitle = {Proceedings of the 42nd International Conference on Machine Learning},
year = {2025},
pages = {26542--26585},
volume = {267},
url = {https://mlanthology.org/icml/2025/islamov2025icml-safeef/}
}