Knowledge Infused Policy Gradients with Upper Confidence Bound for Relational Bandits

Abstract

Contextual bandits have important use cases in real-life scenarios such as online advertising, recommendation systems, and healthcare. However, most algorithms represent context with flat feature vectors, whereas real-world contexts contain a varying number of objects and relations among them. For example, in a music recommendation system, the user context includes the music they listen to, the artists who create that music, the artists' albums, and so on. Richer relational context representations, however, also induce a much larger context space, making exploration-exploitation harder. To improve the efficiency of exploration-exploitation, knowledge about the context can be infused to guide the strategy, and relational context representations offer a natural way for humans to specify such knowledge owing to their descriptive nature. We propose an adaptation of Knowledge Infused Policy Gradients to the contextual bandit setting and a novel Knowledge Infused Policy Gradients Upper Confidence Bound algorithm, and we perform an experimental analysis on a simulated music recommendation dataset and various real-life datasets, identifying where expert knowledge can drastically reduce total regret and where it cannot.
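To give a flavor of the idea, the sketch below combines a standard UCB arm-selection rule with a decaying knowledge bonus. This is a minimal illustration, not the paper's actual algorithm: the `knowledge_prior` scores, the `beta` weight, and the `1 / (1 + counts)` decay schedule are all illustrative assumptions standing in for expert knowledge infusion.

```python
import math
import random

def select_arm(counts, values, knowledge_prior, t, c=2.0, beta=1.0):
    """Score each arm by empirical mean + UCB exploration bonus + a
    knowledge term. The knowledge term decays with pulls, so observed
    rewards eventually dominate the expert hint (illustrative scheme)."""
    for a in range(len(counts)):
        if counts[a] == 0:
            return a  # pull every arm at least once
    scores = []
    for a in range(len(counts)):
        mean = values[a] / counts[a]
        bonus = math.sqrt(c * math.log(t) / counts[a])
        knowledge = beta * knowledge_prior[a] / (1 + counts[a])
        scores.append(mean + bonus + knowledge)
    return max(range(len(scores)), key=scores.__getitem__)

# Toy simulation with Bernoulli arms and a hypothetical expert hint.
random.seed(0)
true_means = [0.2, 0.5, 0.8]       # unknown reward probabilities
knowledge_prior = [0.0, 0.0, 1.0]  # expert believes arm 2 is promising
counts = [0, 0, 0]
values = [0.0, 0.0, 0.0]
for t in range(1, 501):
    a = select_arm(counts, values, knowledge_prior, t)
    reward = 1.0 if random.random() < true_means[a] else 0.0
    counts[a] += 1
    values[a] += reward
```

When the expert hint agrees with the true best arm, as here, the knowledge term steers early exploration toward it and reduces regret; when the hint is wrong, the decay lets the UCB statistics override it over time.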

Cite

Text

Roy et al. "Knowledge Infused Policy Gradients with Upper Confidence Bound for Relational Bandits." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021. doi:10.1007/978-3-030-86486-6_3

Markdown

[Roy et al. "Knowledge Infused Policy Gradients with Upper Confidence Bound for Relational Bandits." European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2021.](https://mlanthology.org/ecmlpkdd/2021/roy2021ecmlpkdd-knowledge/) doi:10.1007/978-3-030-86486-6_3

BibTeX

@inproceedings{roy2021ecmlpkdd-knowledge,
  title     = {{Knowledge Infused Policy Gradients with Upper Confidence Bound for Relational Bandits}},
  author    = {Roy, Kaushik and Zhang, Qi and Gaur, Manas and Sheth, Amit P.},
  booktitle = {European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases},
  year      = {2021},
  pages     = {35--50},
  doi       = {10.1007/978-3-030-86486-6_3},
  url       = {https://mlanthology.org/ecmlpkdd/2021/roy2021ecmlpkdd-knowledge/}
}