Provably Efficient Reinforcement Learning for Online Adaptive Influence Maximization. (arXiv:2206.14846v1 [cs.LG])
July 1, 2022, 1:11 a.m. | Kaixuan Huang, Yu Wu, Xuezhou Zhang, Shenyinying Tu, Qingyun Wu, Mengdi Wang, Huazheng Wang
stat.ML updates on arXiv.org arxiv.org
Online influence maximization aims to maximize the influence spread of content in a social network with an unknown network model by selecting a few seed nodes. Recent studies have followed a non-adaptive setting, where the seed nodes are selected before the diffusion process starts and the network parameters are updated after the diffusion stops. We consider an adaptive version of the content-dependent online influence maximization problem, where the seed nodes are sequentially activated based on real-time feedback. In this paper, we …
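The adaptive setting described above can be illustrated with a minimal sketch: seeds are chosen one at a time, each pick conditioned on which nodes the previous cascades actually activated. This is not the paper's algorithm (the abstract is truncated before any method is given); the graph, the independent-cascade diffusion model, and the degree-based seeding heuristic below are all illustrative assumptions.

```python
import random

def simulate_step(graph, probs, newly_active, active):
    """One round of independent-cascade diffusion: each newly active
    node gets a single chance to activate each inactive neighbor."""
    next_active = set()
    for u in newly_active:
        for v in graph.get(u, []):
            if v not in active and random.random() < probs.get((u, v), 0.1):
                next_active.add(v)
    return next_active

def adaptive_seeding(graph, probs, budget):
    """Adaptive seeding sketch: pick each seed only after observing the
    diffusion triggered by the previous seeds (real-time feedback)."""
    active = set()
    for _ in range(budget):
        candidates = [u for u in graph if u not in active]
        if not candidates:
            break
        # Illustrative heuristic: seed the node with the most inactive neighbors.
        seed = max(candidates,
                   key=lambda u: sum(v not in active for v in graph.get(u, [])))
        active.add(seed)
        frontier = {seed}
        # Let this cascade run to completion before choosing the next seed,
        # so the next choice is conditioned on the observed activations.
        while frontier:
            frontier = simulate_step(graph, probs, frontier, active)
            active |= frontier
    return active

random.seed(0)
graph = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5], 5: []}
probs = {(u, v): 0.5 for u, vs in graph.items() for v in vs}
print(sorted(adaptive_seeding(graph, probs, budget=2)))
```

In the non-adaptive setting criticized by the abstract, all `budget` seeds would be fixed up front and the cascade simulated once; here the inner `while` loop finishes before the next seed is selected, which is what "sequentially activated based on real-time feedback" amounts to.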
Tags: arxiv, influence, learning, lg, reinforcement, reinforcement learning
More from arxiv.org / stat.ML updates on arXiv.org
Mixture of partially linear experts
18 hours ago | arxiv.org
Adaptive deep learning for nonlinear time series models
1 day, 18 hours ago | arxiv.org
A Full Adagrad algorithm with O(Nd) operations
1 day, 18 hours ago | arxiv.org
Minimax Regret Learning for Data with Heterogeneous Subgroups
1 day, 18 hours ago | arxiv.org