Influencing Bandits: Arm Selection for Preference Shaping
March 4, 2024, 5:41 a.m. | Viraj Nadkarni, D. Manjunath, Sharayu Moharir
cs.LG updates on arXiv.org arxiv.org
Abstract: We consider a non-stationary multi-armed bandit in which the population preferences are positively and negatively reinforced by the observed rewards. The objective of the algorithm is to shape the population preferences so as to maximize the fraction of the population favouring a predetermined arm. For the case of binary opinions, two types of opinion dynamics are considered: decreasing elasticity (modeled as a Polya urn with an increasing number of balls) and constant elasticity (using the voter …
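The decreasing-elasticity dynamics described above can be illustrated with a toy simulation. This is not the paper's algorithm; it is a minimal sketch, assuming a Polya urn where each observed reward for a target arm adds a supporter of that arm, and a missed reward reinforces the other opinion. All names and parameters (initial urn contents, the reward probability `p_reward`, the always-pull-target policy) are illustrative assumptions.

```python
import random

def simulate_polya_shaping(steps=2000, seed=0):
    """Toy preference-shaping simulation with decreasing elasticity.

    Binary opinions are tracked as a Polya urn: urn[0] and urn[1] count
    supporters of arm 0 and arm 1. Each round the controller pulls the
    target arm (arm 1); an observed reward adds a ball of colour 1
    (positive reinforcement), otherwise a ball of colour 0 is added
    (negative reinforcement). Because the urn only grows, each new ball
    shifts the preference fraction less over time -- decreasing elasticity.
    """
    rng = random.Random(seed)
    urn = [10, 10]       # initial supporters of arm 0 and arm 1 (assumed)
    p_reward = 0.7       # hypothetical Bernoulli reward rate of the target arm
    for _ in range(steps):
        if rng.random() < p_reward:
            urn[1] += 1  # reward observed: reinforce the target arm
        else:
            urn[0] += 1  # no reward: the opposing opinion is reinforced
    return urn[1] / sum(urn)  # fraction of the population favouring the target arm

fraction = simulate_polya_shaping()
```

Under this naive always-pull policy the favouring fraction drifts toward the target arm's reward rate; the paper's setting is harder because the algorithm must choose among arms whose pulls reshape the very preferences being measured.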
More from arxiv.org / cs.LG updates on arXiv.org
Efficient Data-Driven MPC for Demand Response of Commercial Buildings
2 days, 20 hours ago | arxiv.org
Testing the Segment Anything Model on radiology data
2 days, 20 hours ago | arxiv.org
Calorimeter shower superresolution
2 days, 20 hours ago | arxiv.org
Jobs in AI, ML, Big Data
Software Engineer for AI Training Data (School Specific)
@ G2i Inc | Remote
Software Engineer for AI Training Data (Python)
@ G2i Inc | Remote
Software Engineer for AI Training Data (Tier 2)
@ G2i Inc | Remote
Data Engineer
@ Lemon.io | Remote: Europe, LATAM, Canada, UK, Asia, Oceania
Artificial Intelligence – Bioinformatic Expert
@ University of Texas Medical Branch | Galveston, TX
Lead Developer (AI)
@ Cere Network | San Francisco, US