Jan. 21, 2022, 2:10 a.m. | Matthias Feurer, Benjamin Letham, Frank Hutter, Eytan Bakshy

stat.ML updates on arXiv.org

When hyperparameter optimization of a machine learning algorithm is repeated for multiple datasets, it is possible to transfer knowledge to an optimization run on a new dataset. We develop a new hyperparameter-free ensemble model for Bayesian optimization that generalizes two existing transfer-learning extensions to Bayesian optimization, and we establish a worst-case bound relative to vanilla Bayesian optimization. Using a large collection of hyperparameter optimization benchmark problems, we demonstrate that our contributions substantially reduce optimization time compared to …
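
To make the idea concrete, here is a minimal sketch of an ensemble surrogate for transfer learning in Bayesian optimization: one Gaussian process per previous dataset plus one for the target dataset, combined by moment-matching a weighted mixture of their posteriors. This is an illustration under stated assumptions, not the paper's method: the `EnsembleSurrogate` class, the fixed example weights, and the toy tasks are all hypothetical, and the paper's hyperparameter-free weighting scheme is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor


class EnsembleSurrogate:
    """Weighted ensemble of per-dataset GP surrogates (illustrative only)."""

    def __init__(self, models, weights):
        # models: GPs fit on evaluations from previous datasets and the target.
        # weights: one scalar per model; normalized so the mixture is proper.
        self.models = models
        w = np.asarray(weights, dtype=float)
        self.weights = w / w.sum()

    def predict(self, X):
        # Moment-match the weighted mixture of Gaussian posteriors:
        #   mean = sum_i w_i * mu_i
        #   var  = sum_i w_i * (sigma_i^2 + mu_i^2) - mean^2
        means, stds = zip(*(m.predict(X, return_std=True) for m in self.models))
        means, stds = np.stack(means), np.stack(stds)
        w = self.weights[:, None]
        mean = (w * means).sum(axis=0)
        var = (w * (stds ** 2 + means ** 2)).sum(axis=0) - mean ** 2
        return mean, np.sqrt(np.maximum(var, 0.0))


# Toy usage: two "previous" tasks and one target task, 1-D for brevity.
rng = np.random.default_rng(0)
tasks = [np.sin, np.cos, lambda x: np.sin(x + 0.1)]
models = []
for f in tasks:
    X = rng.uniform(0.0, 6.0, size=(20, 1))
    models.append(GaussianProcessRegressor(normalize_y=True).fit(X, f(X).ravel()))

# Upweight the target-task model; these weights are placeholders, not learned.
ensemble = EnsembleSurrogate(models, weights=[0.25, 0.25, 0.5])
mu, sigma = ensemble.predict(np.linspace(0.0, 6.0, 50).reshape(-1, 1))
```

The returned mean and standard deviation can then drive any standard acquisition function, such as expected improvement, exactly as with a single-GP surrogate.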

arxiv bayesian learning ml optimization transfer learning
