April 22, 2024, 4:43 a.m. | Sigrid Passano Hellan, Christopher G. Lucas, Nigel H. Goddard

cs.LG updates on arXiv.org

arXiv:2311.14653v2 Announce Type: replace
Abstract: Transfer learning for Bayesian optimisation has generally assumed a strong similarity between optimisation tasks, with at least a subset having similar optimal inputs. This assumption can reduce computational costs, but it is violated in a wide range of optimisation problems where transfer learning may nonetheless be useful. We replace this assumption with a weaker one only requiring the shape of the optimisation landscape to be similar, and analyse the recent method Prior Learning for Bayesian …
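The weaker assumption can be pictured with a toy example (this is an illustrative sketch, not the paper's method): two quadratic tasks whose optimal inputs differ, but whose landscape *shape* (curvature) is shared. Estimating the curvature on the source task lets a single Newton-style step locate the target optimum, even though the optima themselves are far apart. All function names and constants here are hypothetical.

```python
# Toy illustration of shape-based transfer: two tasks share landscape
# curvature but have different optimal inputs.
def make_task(optimum, curvature):
    return lambda x: -curvature * (x - optimum) ** 2

source = make_task(optimum=-2.0, curvature=3.0)
target = make_task(optimum=+4.0, curvature=3.0)  # optimum differs; shape shared

# Estimate the shared curvature (negated second derivative) on the source
# task via a central finite difference; exact for a quadratic.
h = 0.5
x0 = 0.0
curv_est = -(source(x0 + h) - 2 * source(x0) + source(x0 - h)) / h**2

# Transfer the learned shape to the target: one gradient-ascent step scaled
# by the estimated curvature recovers the target optimum exactly.
x = 10.0
grad = (target(x + h) - target(x - h)) / (2 * h)
x_next = x + grad / curv_est
```

A "similar optimal inputs" assumption would instead have initialised the target search near the source optimum (-2.0), which here is far from the target optimum (4.0); sharing only the landscape shape avoids that failure mode.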
