March 5, 2024, 2:44 p.m. | Shuhei Watanabe, Neeratyoy Mallik, Edward Bergman, Frank Hutter

cs.LG updates on arXiv.org

arXiv:2403.01888v1 Announce Type: cross
Abstract: While deep learning has celebrated many successes, its results often hinge on the meticulous selection of hyperparameters (HPs). However, the time-consuming nature of deep learning training makes HP optimization (HPO) a costly endeavor, slowing down the development of efficient HPO tools. While zero-cost benchmarks, which provide performance and runtime without actual training, offer a solution for non-parallel setups, they fall short in parallel setups as each worker must communicate its queried runtime to return its …
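The difficulty the abstract points at is one of scheduling: a zero-cost benchmark answers every query instantly, so a parallel optimizer must reconstruct the order in which results would have returned had training actually run, which requires workers to share their simulated runtimes. The sketch below illustrates that bookkeeping for the simple case of random search; the `query` and `simulate_async_random_search` names are hypothetical stand-ins, not the paper's actual API.

```python
import heapq
import random


def query(config, rng):
    """Hypothetical zero-cost benchmark: returns (loss, runtime) instantly.

    The runtime is what the evaluation *would* have cost with real training.
    """
    loss = (config - 0.3) ** 2
    runtime = rng.uniform(1.0, 10.0)
    return loss, runtime


def simulate_async_random_search(n_workers=4, n_evals=20, seed=0):
    rng = random.Random(seed)
    # Min-heap of simulated times at which each worker becomes free.
    free_at = [0.0] * n_workers
    heapq.heapify(free_at)
    history = []  # (simulated finish time, config, loss)
    for _ in range(n_evals):
        start = heapq.heappop(free_at)  # earliest-free worker takes the job
        config = rng.random()           # stand-in for the optimizer's proposal
        loss, runtime = query(config, rng)
        finish = start + runtime
        heapq.heappush(free_at, finish)
        history.append((finish, config, loss))
    # Sorting by simulated finish time recovers the order in which results
    # would actually return to an asynchronous optimizer.
    return sorted(history)


if __name__ == "__main__":
    for finish, config, loss in simulate_async_random_search():
        print(f"t={finish:6.2f}  config={config:.3f}  loss={loss:.4f}")
```

For random search the proposals are independent of returned results, so sorting by simulated finish time suffices; a model-based optimizer would additionally need each result replayed at the correct simulated time, which is exactly the runtime-communication overhead in parallel setups that the abstract describes.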
