Model-Independent Online Learning for Influence Maximization
Abstract
We consider influence maximization (IM) in social networks, which is the problem of maximizing the number of users that become aware of a product by selecting a set of “seed” users to expose the product to. While prior work assumes a known model of information diffusion, we propose a novel parametrization that not only makes our framework agnostic to the underlying diffusion model, but also statistically efficient to learn from data. We give a corresponding monotone, submodular surrogate function, and show that it is a good approximation to the original IM objective. We also consider the case of a new marketer looking to exploit an existing social network, while simultaneously learning the factors governing information propagation. For this, we propose a pairwise-influence semi-bandit feedback model and develop a LinUCB-based bandit algorithm. Our model-independent analysis shows that our regret bound has a better (as compared to previous work) dependence on the size of the network. Experimental evaluation suggests that our framework is robust to the underlying diffusion model and can efficiently learn a near-optimal solution.
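To picture the learning setup the abstract describes, here is a minimal, illustrative LinUCB-style loop under pairwise-influence semi-bandit feedback. It is a sketch under stated assumptions, not the authors' algorithm: the function name `linucb_im`, the feedback oracle `observe_diffusion`, the shared linear parametrization of pairwise influence, and the top-k seed-selection heuristic are all choices made for this example.

```python
import numpy as np

def linucb_im(features, observe_diffusion, n_rounds, seed_size, alpha=1.0, lam=1.0):
    """Illustrative LinUCB-style online influence maximization loop.

    features[u][w] : d-dimensional feature vector for the (seed u, target w) pair.
    observe_diffusion(u) -> {w: 0 or 1} : hypothetical semi-bandit feedback oracle
        reporting, for each target w, whether w was influenced when u was seeded.
    """
    n = len(features)                               # number of candidate seed nodes
    d = len(next(iter(features[0].values())))       # feature dimension
    A = lam * np.eye(d)                             # ridge-regression Gram matrix
    b = np.zeros(d)                                 # accumulated feature-weighted rewards

    for _ in range(n_rounds):
        theta = np.linalg.solve(A, b)               # current estimate of the influence parameter
        A_inv = np.linalg.inv(A)

        # Optimistic (UCB) estimate of the pairwise influence of seed u on target w.
        def ucb(u, w):
            x = np.asarray(features[u][w])
            return float(x @ theta + alpha * np.sqrt(x @ A_inv @ x))

        # Simple surrogate for seed selection: rank seeds by total optimistic influence
        # and take the top-k (a placeholder for a proper greedy submodular step).
        scores = {u: sum(ucb(u, w) for w in features[u]) for u in range(n)}
        seeds = sorted(scores, key=scores.get, reverse=True)[:seed_size]

        # Semi-bandit feedback: one ridge-regression update per (chosen seed, target) pair.
        for u in seeds:
            for w, influenced in observe_diffusion(u).items():
                x = np.asarray(features[u][w])
                A += np.outer(x, x)
                b += influenced * x

    return np.linalg.solve(A, b)                    # final parameter estimate
```

In the paper's framework the seed-selection step would instead maximize the monotone, submodular surrogate mentioned above; the top-k ranking here only stands in for that step to keep the sketch short.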
Cite

Text
Vaswani et al. "Model-Independent Online Learning for Influence Maximization." International Conference on Machine Learning, 2017.

Markdown
[Vaswani et al. "Model-Independent Online Learning for Influence Maximization." International Conference on Machine Learning, 2017.](https://mlanthology.org/icml/2017/vaswani2017icml-modelindependent/)

BibTeX
@inproceedings{vaswani2017icml-modelindependent,
title = {{Model-Independent Online Learning for Influence Maximization}},
author = {Vaswani, Sharan and Kveton, Branislav and Wen, Zheng and Ghavamzadeh, Mohammad and Lakshmanan, Laks V. S. and Schmidt, Mark},
booktitle = {International Conference on Machine Learning},
year = {2017},
  pages = {3530--3539},
volume = {70},
url = {https://mlanthology.org/icml/2017/vaswani2017icml-modelindependent/}
}