June 23, 2022, 1:13 a.m. | Mitchell Wortsman, Gabriel Ilharco, Samir Yitzhak Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S. Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, Ludwig Schmidt

cs.CV updates on arXiv.org

The conventional recipe for maximizing model accuracy is to (1) train
multiple models with various hyperparameters and (2) pick the individual model
which performs best on a held-out validation set, discarding the remainder. In
this paper, we revisit the second step of this procedure in the context of
fine-tuning large pre-trained models, where fine-tuned models often appear to
lie in a single low error basin. We show that averaging the weights of multiple
models fine-tuned with different hyperparameter configurations often improves
accuracy and robustness.
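
In its simplest form, the weight averaging the abstract describes reduces to an element-wise mean over the parameters of several fine-tuned checkpoints of the same architecture. The sketch below illustrates this in PyTorch; the checkpoint paths and the surrounding model are hypothetical placeholders, not the paper's released code.

    import torch

    def uniform_soup(checkpoint_paths):
        # Element-wise mean of several fine-tuned checkpoints
        # that share one architecture.
        soup = None
        for path in checkpoint_paths:
            state = torch.load(path, map_location="cpu")
            if soup is None:
                # Copy as float so integer buffers (e.g. batch-norm
                # counters) average cleanly.
                soup = {k: v.clone().float() for k, v in state.items()}
            else:
                for k, v in state.items():
                    soup[k] += v.float()
        return {k: v / len(checkpoint_paths) for k, v in soup.items()}

    # Hypothetical usage: load the averaged weights into a fresh model of
    # the same architecture, then evaluate as usual.
    # model.load_state_dict(uniform_soup(["ft_run1.pt", "ft_run2.pt", "ft_run3.pt"]))

Note that this only makes sense when all checkpoints are fine-tuned from the same pre-trained initialization, which is why, as the abstract notes, the models tend to lie in a single low error basin; unlike an ensemble, the averaged model adds no inference or memory cost.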

Tags: accuracy, arxiv, inference, lg, time
