Web: http://arxiv.org/abs/2202.00821

June 20, 2022, 1:12 a.m. | Tom Blau, Edwin V. Bonilla, Iadine Chades, Amir Dezfouli

stat.ML updates on arXiv.org

Bayesian approaches developed to solve the optimal design of sequential
experiments are mathematically elegant but computationally challenging.
Recently, techniques using amortization have been proposed to make these
Bayesian approaches practical, by training a parameterized policy that proposes
designs efficiently at deployment time. However, these methods may not
sufficiently explore the design space, require access to a differentiable
probabilistic model and can only optimize over continuous design spaces. Here,
we address these limitations by showing that the problem of optimizing policies …
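To make the setting concrete, here is a minimal toy sketch of the sequential experimental design loop the abstract refers to, for a conjugate Gaussian linear model. All names (`posterior_update`, `greedy_policy`, the candidate designs) are illustrative assumptions, and the myopic greedy policy shown is a simple baseline for intuition, not the amortized RL policy the paper proposes.

```python
import numpy as np

def posterior_update(mu, var, design, y, noise_var=1.0):
    # Conjugate Gaussian update for the model y ~ N(design * theta, noise_var),
    # with prior theta ~ N(mu, var).
    precision = 1.0 / var + design**2 / noise_var
    new_var = 1.0 / precision
    new_mu = new_var * (mu / var + design * y / noise_var)
    return new_mu, new_var

def entropy(var):
    # Differential entropy of a 1-D Gaussian with variance `var`.
    return 0.5 * np.log(2.0 * np.pi * np.e * var)

def greedy_policy(mu, var, candidates, noise_var=1.0):
    # Myopic policy: pick the design with the largest expected reduction
    # in posterior entropy (information gain) for the next single step.
    def info_gain(d):
        post_var = 1.0 / (1.0 / var + d**2 / noise_var)
        return entropy(var) - entropy(post_var)
    return max(candidates, key=info_gain)

rng = np.random.default_rng(0)
theta_true = 1.5          # hypothetical ground truth, for simulation only
mu, var = 0.0, 4.0        # prior belief over theta
candidates = [0.1, 0.5, 1.0, 2.0]

for t in range(5):
    d = greedy_policy(mu, var, candidates)        # propose a design
    y = d * theta_true + rng.normal(0.0, 1.0)     # run the "experiment"
    mu, var = posterior_update(mu, var, d, y)     # update beliefs
```

An amortized approach in the spirit of the abstract would replace `greedy_policy` with a learned, parameterized policy mapping the belief state (here `mu`, `var`) to a design, trained ahead of deployment so that design proposals are cheap at run time.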

