Lifelong Learning via Progressive Distillation and Retrospection
Abstract
Lifelong learning aims to adapt a learned model to new tasks while retaining the knowledge gained earlier. A key challenge for lifelong learning is how to strike a balance between the preservation of old tasks and the adaptation to a new one within a given model. Approaches that combine both objectives in training have been explored in previous works, yet the performance still suffers from considerable degradation over a long sequence of tasks. In this work, we propose a novel approach to lifelong learning, which seeks a better balance between preservation and adaptation via two techniques: Distillation and Retrospection. Specifically, the target model adapts to the new task by knowledge distillation from an intermediate expert, while the previous knowledge is more effectively preserved by caching a small subset of data from old tasks. The combination of Distillation and Retrospection leads to a gentler learning curve for the target model, and extensive experiments demonstrate that our approach brings consistent improvements on both old and new tasks.
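The abstract describes two complementary objectives: adaptation, via distillation from an intermediate expert trained on the new task, and preservation, via a small cache of old-task data (Retrospection). The sketch below is a minimal, hypothetical illustration of how such a combined objective could be written in PyTorch; the function names, the weighting factor `lam`, the temperature `T`, and the single-head setup are assumptions made for illustration, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft-target loss between temperature-scaled distributions (standard KD)."""
    log_p = F.log_softmax(student_logits / T, dim=1)
    q = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)

def lifelong_step(target, expert, old_teacher, new_batch, cached_old_batch, lam=1.0):
    """One hypothetical training step combining Distillation and Retrospection.

    target          -- model being adapted to the new task
    expert          -- intermediate expert already trained on the new task
    old_teacher     -- frozen copy of the target model before adaptation
    cached_old_batch-- small subset of data retained from old tasks
    """
    x_new, _ = new_batch
    x_old, _ = cached_old_batch

    # Adaptation: distill the new-task expert into the target model.
    loss_new = distillation_loss(target(x_new), expert(x_new).detach())

    # Retrospection: preserve old-task behaviour on the cached exemplars.
    loss_old = distillation_loss(target(x_old), old_teacher(x_old).detach())

    return loss_new + lam * loss_old
```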
Cite
Text
Hou et al. "Lifelong Learning via Progressive Distillation and Retrospection." Proceedings of the European Conference on Computer Vision (ECCV), 2018. doi:10.1007/978-3-030-01219-9_27
Markdown
[Hou et al. "Lifelong Learning via Progressive Distillation and Retrospection." Proceedings of the European Conference on Computer Vision (ECCV), 2018.](https://mlanthology.org/eccv/2018/hou2018eccv-lifelong/) doi:10.1007/978-3-030-01219-9_27
BibTeX
@inproceedings{hou2018eccv-lifelong,
title = {{Lifelong Learning via Progressive Distillation and Retrospection}},
author = {Hou, Saihui and Pan, Xinyu and Change Loy, Chen and Wang, Zilei and Lin, Dahua},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
year = {2018},
doi = {10.1007/978-3-030-01219-9_27},
url = {https://mlanthology.org/eccv/2018/hou2018eccv-lifelong/}
}