April 24, 2023, 12:46 a.m. | Shuhei Watanabe, Noor Awad, Masaki Onishi, Frank Hutter

cs.LG updates on arXiv.org

Hyperparameter optimization (HPO) is a vital step in improving performance in
deep learning (DL). Practitioners are often faced with the trade-off between
multiple criteria, such as accuracy and latency. Given the high computational
needs of DL and the growing demand for efficient HPO, the acceleration of
multi-objective (MO) optimization becomes ever more important. Despite the
significant body of work on meta-learning for HPO, existing methods are
inapplicable to the multi-objective tree-structured Parzen estimator (MO-TPE), a
simple yet powerful MO-HPO algorithm. In …
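
The accuracy-latency trade-off mentioned in the abstract can be explored with an off-the-shelf multi-objective TPE sampler. The sketch below is illustrative only and assumes Optuna's TPESampler in a two-objective study; the toy "accuracy" and "latency" objectives are synthetic stand-ins, and the paper's meta-learning extension of MO-TPE is not part of this code.

```python
# Minimal sketch of multi-objective HPO with a TPE-based sampler (Optuna assumed).
# The objectives are synthetic proxies, not the paper's benchmarks or method.
import optuna


def objective(trial: optuna.Trial) -> tuple[float, float]:
    # Two illustrative hyperparameters of a hypothetical model.
    width = trial.suggest_int("width", 16, 512, log=True)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)

    # Synthetic trade-off: wider models are "more accurate" but "slower".
    accuracy = 1.0 - 1.0 / width - 0.1 * dropout
    latency = 0.01 * width

    return accuracy, latency  # maximize accuracy, minimize latency


if __name__ == "__main__":
    study = optuna.create_study(
        directions=["maximize", "minimize"],
        sampler=optuna.samplers.TPESampler(seed=0),
    )
    study.optimize(objective, n_trials=50)

    # best_trials holds the Pareto-optimal trials, i.e. the sampled
    # approximation of the accuracy-latency trade-off front.
    for t in study.best_trials:
        print(t.number, t.values, t.params)
```

Each Pareto-optimal trial represents one non-dominated accuracy-latency compromise, which is the kind of MO-HPO output the abstract's acceleration work targets.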

