Contextual Bandits in a Survey Experiment on Charitable Giving: Within-Experiment Outcomes versus Policy Learning. (arXiv:2211.12004v1 [econ.EM])
Nov. 23, 2022, 2:11 a.m. | Susan Athey, Undral Byambadalai, Vitor Hadad, Sanath Kumar Krishnamurthy, Weiwen Leung, Joseph Jay Williams
cs.LG updates on arXiv.org
We design and implement an adaptive experiment (a "contextual bandit") to learn a targeted treatment assignment policy, where the goal is to use a participant's survey responses to determine which charity to expose them to in a donation solicitation. The design balances two competing objectives: optimizing the outcomes for the subjects in the experiment ("cumulative regret minimization") and gathering data that will be most useful for policy learning, that is, for learning an assignment rule that will maximize welfare if …
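The abstract's core idea, assigning an arm (charity) based on a participant's context (survey responses) while trading off in-experiment outcomes against data useful for learning a policy, can be illustrated with a minimal epsilon-greedy contextual bandit. This is a generic sketch, not the authors' design: the two binary contexts, the donation probabilities, and the epsilon-greedy exploration rule are all illustrative assumptions.

```python
import random

def run_contextual_bandit(n_rounds=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy contextual bandit with two contexts and two arms.

    Hypothetical donation probabilities (assumed for illustration):
    context 0 responds better to charity 0, context 1 to charity 1.
    """
    rng = random.Random(seed)
    true_p = {0: [0.30, 0.10],   # context 0: charity 0 is better
              1: [0.10, 0.30]}   # context 1: charity 1 is better
    counts = {c: [0, 0] for c in (0, 1)}
    values = {c: [0.0, 0.0] for c in (0, 1)}  # running mean reward per (context, arm)

    for _ in range(n_rounds):
        ctx = rng.randrange(2)                # participant's survey-derived context
        if rng.random() < epsilon:            # explore: gather data for policy learning
            arm = rng.randrange(2)
        else:                                 # exploit: reduce within-experiment regret
            arm = max((0, 1), key=lambda a: values[ctx][a])
        reward = 1.0 if rng.random() < true_p[ctx][arm] else 0.0
        counts[ctx][arm] += 1
        values[ctx][arm] += (reward - values[ctx][arm]) / counts[ctx][arm]

    # learned assignment policy: best estimated arm per context
    policy = {c: max((0, 1), key=lambda a: values[c][a]) for c in (0, 1)}
    return policy, values

policy, values = run_contextual_bandit()
print(policy)
```

Raising `epsilon` gathers more exploratory data (better for later policy learning) at the cost of worse outcomes for participants during the experiment, which is exactly the tension the paper studies.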