Recurrent Dirichlet Belief Networks for Interpretable Dynamic Relational Data Modelling

Abstract

Online kernel selection in a continuous kernel space is more complex than in a discrete kernel set. Existing online kernel selection approaches for continuous kernel spaces have per-round computational complexity that is linear in the current number of rounds, and they lack sublinear regret guarantees because of the continuum of candidate kernels. To address these issues, we propose a novel hypothesis sketching approach to online kernel selection in continuous kernel space that has constant per-round computational complexity and enjoys a sublinear regret bound. The main idea of the proposed hypothesis sketching approach is to maintain the orthogonality of the basis functions and the prediction accuracy of the hypothesis sketches in a time-varying reproducing kernel Hilbert space. We first present an efficient dependency condition for maintaining the basis functions of the hypothesis sketches under a computational budget. We then update the weights and the optimal kernels by minimizing the instantaneous loss of the hypothesis sketches via online gradient descent with a compensation strategy. We prove that the proposed hypothesis sketching approach enjoys a regret bound of order O(√T) for online kernel selection in continuous kernel space, where T is the number of rounds; this bound is optimal for convex loss functions, and the approach reduces the per-round computational complexity from linear to constant in the number of rounds. Experimental results demonstrate that the proposed hypothesis sketching approach significantly improves the efficiency of online kernel selection in continuous kernel space while retaining comparable predictive accuracy.
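The two ingredients the abstract describes — a dependency condition that keeps the basis functions under a computational budget, and online gradient descent that updates both the weights and a continuous kernel parameter against the instantaneous loss — can be illustrated with a minimal sketch. This is not the authors' algorithm: it assumes a Gaussian kernel, substitutes a simple coherence-based test for the paper's dependency condition, omits the compensation strategy, and all class and parameter names (`BudgetedOnlineKernelLearner`, `nu`, `lr_sigma`, etc.) are illustrative.

```python
import numpy as np

class BudgetedOnlineKernelLearner:
    """Illustrative budgeted online kernel learner (not the paper's algorithm)."""

    def __init__(self, sigma=1.0, budget=20, nu=0.5, lr_w=0.1, lr_sigma=0.001):
        self.sigma = sigma            # continuous kernel parameter (Gaussian bandwidth)
        self.budget = budget          # hard cap on the number of basis functions
        self.nu = nu                  # dependency threshold for admitting a basis point
        self.lr_w = lr_w              # OGD step size for the expansion weights
        self.lr_sigma = lr_sigma      # OGD step size for the kernel parameter
        self.basis = []               # stored basis points
        self.alpha = np.zeros(0)      # expansion weights

    def _kernel(self, x):
        """Gaussian kernel values and squared distances to all basis points."""
        if not self.basis:
            return np.zeros(0), np.zeros(0)
        d2 = np.array([float(np.dot(x - b, x - b)) for b in self.basis])
        return np.exp(-d2 / (2.0 * self.sigma ** 2)), d2

    def predict(self, x):
        k, _ = self._kernel(x)
        return float(self.alpha @ k) if k.size else 0.0

    def update(self, x, y):
        """One online round; returns the instantaneous squared loss."""
        k, d2 = self._kernel(x)
        f = float(self.alpha @ k) if k.size else 0.0
        err = f - y                   # gradient factor of the loss (f - y)^2 / 2
        # Coherence-based stand-in for the dependency condition: admit x as a
        # new basis point only if it is far (in kernel similarity) from every
        # stored basis point, and only while the budget allows it.
        if (1.0 - (k.max() if k.size else 0.0)) > self.nu and len(self.basis) < self.budget:
            self.basis.append(np.asarray(x, dtype=float))
            self.alpha = np.append(self.alpha, -self.lr_w * err)
            return err ** 2 / 2.0
        if k.size:
            # OGD on the instantaneous loss w.r.t. the kernel parameter,
            # using d k_i / d sigma = k_i * d2_i / sigma^3; clipped for stability.
            grad_sigma = err * float(self.alpha @ (k * d2)) / self.sigma ** 3
            self.sigma = float(np.clip(self.sigma - self.lr_sigma * grad_sigma, 0.1, 5.0))
            # OGD on the instantaneous loss w.r.t. the expansion weights.
            self.alpha -= self.lr_w * err * k
        return err ** 2 / 2.0
```

Because the basis can hold at most `budget` points, each round costs O(budget) regardless of the round index — the sketch's analogue of the abstract's constant per-round complexity — while the bandwidth update searches the continuous kernel space.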

Cite

Text

Li et al. "Recurrent Dirichlet Belief Networks for Interpretable Dynamic Relational Data Modelling." International Joint Conference on Artificial Intelligence, 2020. doi:10.24963/IJCAI.2020/342

Markdown

[Li et al. "Recurrent Dirichlet Belief Networks for Interpretable Dynamic Relational Data Modelling." International Joint Conference on Artificial Intelligence, 2020.](https://mlanthology.org/ijcai/2020/li2020ijcai-recurrent/) doi:10.24963/IJCAI.2020/342

BibTeX

@inproceedings{li2020ijcai-recurrent,
  title     = {{Recurrent Dirichlet Belief Networks for Interpretable Dynamic Relational Data Modelling}},
  author    = {Li, Yaqiong and Fan, Xuhui and Chen, Ling and Li, Bin and Yu, Zheng and Sisson, Scott A.},
  booktitle = {International Joint Conference on Artificial Intelligence},
  year      = {2020},
  pages     = {2470--2476},
  doi       = {10.24963/IJCAI.2020/342},
  url       = {https://mlanthology.org/ijcai/2020/li2020ijcai-recurrent/}
}