April 26, 2024, 4:42 a.m. | Abhinav Pomalapally, Bassel El Mabsout, Renato Mansuco

cs.LG updates on arXiv.org

arXiv:2310.14671v2 Announce Type: replace
Abstract: In contemporary machine learning workloads, numerous hyper-parameter search algorithms are frequently utilized to efficiently discover high-performing hyper-parameter values, such as learning and regularization rates. As a result, a range of parameter schedules have been designed to leverage the capability of adjusting hyper-parameters during training to enhance loss performance. These schedules, however, introduce new hyper-parameters to be searched and do not account for the current loss values of the models being trained.
To address these issues, …
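To make the abstract's point concrete, here is a minimal sketch (not from the paper) of a standard exponential-decay learning-rate schedule. It illustrates both issues the authors raise: the schedule introduces new hyper-parameters of its own (`initial_lr` and `decay_rate`, both hypothetical names here) that must themselves be searched, and it adjusts the learning rate purely as a function of the step count, ignoring the model's current loss.

```python
def exponential_decay(initial_lr: float, decay_rate: float, step: int) -> float:
    """Return the learning rate at a given training step.

    The schedule depends only on the step counter, not on the
    training loss, so it cannot react to how training is going.
    """
    return initial_lr * (decay_rate ** step)

# The rate shrinks on a fixed timetable regardless of the loss curve,
# and initial_lr / decay_rate become extra search dimensions.
lrs = [exponential_decay(0.1, 0.9, step) for step in range(3)]
```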

