Web: http://arxiv.org/abs/2206.08366

June 17, 2022, 1:12 a.m. | Sebastian Ament, Carla Gomes

stat.ML updates on arXiv.org

Bayesian Optimization (BO) has shown great promise for the global
optimization of functions that are expensive to evaluate, but despite many
successes, standard approaches can struggle in high dimensions. To improve the
performance of BO, prior work suggested incorporating gradient information into
a Gaussian process surrogate of the objective, giving rise to kernel matrices
of size $nd \times nd$ for $n$ observations in $d$ dimensions. Naïvely
multiplying with (resp. inverting) these matrices requires
$\mathcal{O}(n^2d^2)$ (resp. $\mathcal{O}(n^3d^3)$) operations, which becomes
infeasible …
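The size and cost figures quoted above can be made concrete with a small sketch. The snippet below assembles the gradient-gradient covariance of a squared-exponential (RBF) kernel, which is one standard way gradient observations enter a Gaussian process surrogate; it is a generic textbook construction under assumed notation (lengthscale `ell`), not the paper's proposed scalable method. The resulting matrix is $nd \times nd$, so a dense solve already costs $\mathcal{O}(n^3d^3)$ at modest $n$ and $d$.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    """Squared-exponential kernel k(x, y) = exp(-||x - y||^2 / (2 ell^2))."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * ell ** 2))

def grad_grad_block(x, y, ell=1.0):
    """d x d block of cross-derivatives d^2 k / (dx dy) for the RBF kernel:
    (delta_ij / ell^2 - (x_i - y_i)(x_j - y_j) / ell^4) * k(x, y)."""
    d = x.shape[0]
    diff = x - y
    k = rbf(x, y, ell)
    return (np.eye(d) / ell ** 2 - np.outer(diff, diff) / ell ** 4) * k

def gradient_kernel_matrix(X, ell=1.0):
    """Assemble the full nd x nd covariance of n observed gradients."""
    n, d = X.shape
    K = np.zeros((n * d, n * d))
    for i in range(n):
        for j in range(n):
            K[i * d:(i + 1) * d, j * d:(j + 1) * d] = grad_grad_block(X[i], X[j], ell)
    return K

rng = np.random.default_rng(0)
n, d = 50, 8
X = rng.standard_normal((n, d))
K = gradient_kernel_matrix(X)
print(K.shape)  # (400, 400): the nd x nd kernel matrix from the abstract
# A dense solve against this matrix is the O(n^3 d^3) step that
# motivates scalable approximations.
alpha = np.linalg.solve(K + 1e-6 * np.eye(n * d), rng.standard_normal(n * d))
```

Even at $n = 50$, $d = 8$ the matrix has $160{,}000$ entries; doubling both $n$ and $d$ multiplies the dense-solve cost by $64$, which is why standard approaches struggle in high dimensions.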

