Gradient Perturbation Is Underrated for Differentially Private Convex Optimization
Abstract
Policy distillation, which transfers a teacher policy to a student policy, has achieved great success in challenging tasks of deep reinforcement learning. This teacher-student framework requires a well-trained teacher model, which is computationally expensive. Moreover, the performance of the student model could be limited by the teacher model if the teacher model is not optimal. In light of collaborative learning, we study the feasibility of involving joint intellectual efforts from diverse perspectives of student models. In this work, we introduce dual policy distillation (DPD), a student-student framework in which two learners operate on the same environment to explore different perspectives of the environment and extract knowledge from each other to enhance their learning. The key challenge in developing this dual learning framework is to identify the beneficial knowledge from the peer learner for contemporary learning-based reinforcement learning algorithms, since it is unclear whether the knowledge distilled from an imperfect and noisy peer learner would be helpful. To address this challenge, we theoretically justify that distilling knowledge from a peer learner leads to policy improvement and propose a disadvantageous distillation strategy based on the theoretical results. Experiments on several continuous control tasks show that the proposed framework achieves superior performance with a learning-based agent and function approximation, without the use of expensive teacher models.
Cite
Text

Yu et al. "Gradient Perturbation Is Underrated for Differentially Private Convex Optimization." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/431

Markdown

[Yu et al. "Gradient Perturbation Is Underrated for Differentially Private Convex Optimization." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/yu2020ijcai-gradient/) doi:10.24963/IJCAI.2020/431

BibTeX
@inproceedings{yu2020ijcai-gradient,
title = {{Gradient Perturbation Is Underrated for Differentially Private Convex Optimization}},
author = {Yu, Da and Zhang, Huishuai and Chen, Wei and Yin, Jian and Liu, Tie-Yan},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2020},
  pages = {3117--3123},
doi = {10.24963/IJCAI.2020/431},
url = {https://mlanthology.org/ijcai/2020/yu2020ijcai-gradient/}
}