Future Augmentation with Self-Distillation in Recommendation
Abstract
Sequential recommendation (SR) aims to predict the items a user will click next based on the user's historical behavior sequence. Conventional SR models are trained on the next-item prediction task, and thus must deal with two challenges: the data sparsity of user feedback and the variability and irregularity of user behaviors. Unlike natural language sequences in NLP, user behavior sequences in recommendation are far more personalized, irregular, and unordered. Therefore, the current user preferences extracted from historical behaviors may correlate not only with the classical next-1 (i.e., current clicked) item to be predicted but also with the next-k (i.e., future clicked) items. Inspired by this phenomenon, we propose Future Augmentation with Self-Distillation in Recommendation (FASRec). It treats future clicked items as augmented positive signals for the current click during training, which addresses both the data sparsity and the behavior irregularity and variability issues. To denoise these augmented future clicks, we further adopt a self-distillation module with an exponential moving average strategy, using the soft labels of self-distillation as confidence scores for more accurate augmentation. In experiments, FASRec achieves significant and consistent improvements in both offline and online evaluations with different base SR models, confirming its effectiveness and universality. FASRec has been deployed on a widely-used recommendation feed in Tencent. The source code is available at https://github.com/FASRec/FASRec .
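The core idea can be sketched in a few lines: an EMA-updated teacher produces soft labels, and those soft labels weight the augmented future-click positives in the student's loss. The function names, the momentum value, and the weighting scheme below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def ema_update(teacher, student, momentum=0.99):
    """Exponential-moving-average update of the teacher parameters.
    `teacher` and `student` are hypothetical parameter dicts; the
    momentum value is an assumption."""
    return {k: momentum * teacher[k] + (1.0 - momentum) * student[k]
            for k in teacher}

def augmented_loss(student_scores, teacher_probs, next1_idx, future_idx,
                   alpha=0.5):
    """Cross-entropy on the classical next-1 item, plus future-click
    (next-k) positives weighted by the teacher's soft-label confidence.
    `alpha` balances the augmented term and is an assumed hyperparameter."""
    # Softmax over candidate items for the student.
    exp = np.exp(student_scores - student_scores.max())
    probs = exp / exp.sum()
    loss = -np.log(probs[next1_idx])  # classical next-1 loss
    for j in future_idx:              # augmented future-click positives
        # Teacher confidence denoises the augmentation: low-confidence
        # future clicks contribute little to the gradient.
        loss += -alpha * teacher_probs[j] * np.log(probs[j])
    return loss
```

Intuitively, a noisy future click that the teacher assigns low probability is down-weighted, while a future click the teacher is confident about acts almost like an extra ground-truth positive.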
Cite
Text
Liu et al. "Future Augmentation with Self-Distillation in Recommendation." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023. doi:10.1007/978-3-031-43427-3_36
Markdown
[Liu et al. "Future Augmentation with Self-Distillation in Recommendation." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2023.](https://mlanthology.org/ecmlpkdd/2023/liu2023ecmlpkdd-future/) doi:10.1007/978-3-031-43427-3_36
BibTeX
@inproceedings{liu2023ecmlpkdd-future,
title = {{Future Augmentation with Self-Distillation in Recommendation}},
author = {Liu, Chong and Xie, Ruobing and Liu, Xiaoyang and Wang, Pinzheng and Zheng, Rongqin and Zhang, Lixin and Li, Juntao and Xia, Feng and Lin, Leyu},
booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
year = {2023},
pages = {602--618},
doi = {10.1007/978-3-031-43427-3_36},
url = {https://mlanthology.org/ecmlpkdd/2023/liu2023ecmlpkdd-future/}
}