gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling (Extended Abstract)
Abstract
We study two logics, LTLf+ and PPLTL+, for expressing properties of infinite traces, which are based on the linear-time temporal logics LTLf and PPLTL on finite traces. LTLf+ and PPLTL+ use the levels of Manna and Pnueli’s LTL safety-progress hierarchy, and thus have the same expressive power as LTL. However, they also retain a crucial characteristic of reactive synthesis for the base logics: the game arena for strategy extraction can be derived from deterministic finite automata (DFA). Consequently, these logics circumvent the notorious difficulties associated with determinizing infinite-trace automata, typical of LTL synthesis. We present optimal DFA-based techniques for solving reactive synthesis for LTLf+ and PPLTL+. Additionally, we adapt these algorithms to optimally solve satisfiability and model checking for these two logics.
Cite
Text
Petrov and Macdonald. "gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2024. doi:10.24963/ijcai.2024/939
Markdown
[Petrov and Macdonald. "gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling (Extended Abstract)." International Joint Conference on Artificial Intelligence, 2024.](https://mlanthology.org/ijcai/2024/petrov2024ijcai-gsasrec/) doi:10.24963/ijcai.2024/939
BibTeX
@inproceedings{petrov2024ijcai-gsasrec,
title = {{gSASRec: Reducing Overconfidence in Sequential Recommendation Trained with Negative Sampling (Extended Abstract)}},
author = {Petrov, Aleksandr V. and Macdonald, Craig},
booktitle = {International Joint Conference on Artificial Intelligence},
year = {2024},
pages = {8447--8449},
doi = {10.24963/ijcai.2024/939},
url = {https://mlanthology.org/ijcai/2024/petrov2024ijcai-gsasrec/}
}