Aug. 10, 2022, 1:11 a.m. | Quentin Bertrand, Quentin Klopfenstein, Mathurin Massias, Mathieu Blondel, Samuel Vaiter, Alexandre Gramfort, Joseph Salmon

stat.ML updates on arXiv.org

Finding the optimal hyperparameters of a model can be cast as a bilevel optimization problem, typically solved using zero-order techniques. In this work, we study first-order methods when the inner optimization problem is convex but non-smooth. We show that the forward-mode differentiation of proximal gradient descent and of proximal coordinate descent yields sequences of Jacobians converging toward the exact Jacobian. Using implicit differentiation, we show it is possible to leverage the non-smoothness of the inner problem to speed up the computation. …
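
To make the forward-mode idea concrete, here is a minimal NumPy sketch that takes the Lasso as the inner problem: proximal gradient descent is iterated while the Jacobian of the iterates with respect to the regularization strength is propagated through each step by the chain rule. The Lasso instance, function names, and validation criterion are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(z, t):
    # proximal operator of t * ||.||_1
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_diff_prox_gd(X, y, lam, n_iter=500):
    # Assumed inner problem (Lasso):
    #   min_beta  1/(2n) ||y - X beta||^2 + lam * ||beta||_1
    # Propagates J = dbeta/dlam jointly with the iterates (forward mode).
    n, p = X.shape
    L = np.linalg.norm(X, ord=2) ** 2 / n      # Lipschitz constant of the smooth part
    beta, J = np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        z = beta - X.T @ (X @ beta - y) / (n * L)   # gradient step
        dz = J - X.T @ (X @ J) / (n * L)            # forward-mode derivative of that step
        beta = soft_threshold(z, lam / L)
        active = np.abs(z) > lam / L                # prox is differentiable off the kink
        # chain rule through soft-thresholding:
        #   dST/dz = 1{|z| > t},  dST/dt = -sign(z) * 1{|z| > t},  with t = lam / L
        J = active * (dz - np.sign(z) / L)
    return beta, J

# Usage: hypergradient of a held-out loss via the chain rule (synthetic data).
rng = np.random.default_rng(0)
X, y = rng.standard_normal((50, 30)), rng.standard_normal(50)
X_val, y_val = rng.standard_normal((20, 30)), rng.standard_normal(20)
beta, J = forward_diff_prox_gd(X, y, lam=0.1)
# outer criterion C(beta) = 1/(2 n_val) ||y_val - X_val beta||^2, so dC/dlam = J^T grad C
hypergrad = J @ (X_val.T @ (X_val @ beta - y_val)) / len(y_val)
```

The implicit-differentiation speed-up mentioned in the abstract comes from the non-smoothness itself: at convergence, the Jacobian of the Lasso solution is nonzero only on the (typically small) active set, so it can be recovered from a linear system of that size instead of by iterating. A hedged sketch under the same Lasso assumption:

```python
def implicit_diff_jacobian(X, beta):
    # Optimality on the support S: X_S^T (X beta - y)/n + lam * sign(beta_S) = 0.
    # Differentiating w.r.t. lam gives (X_S^T X_S / n) J_S = -sign(beta_S),
    # with J = 0 off the support.
    n = X.shape[0]
    S = np.flatnonzero(beta)
    J = np.zeros_like(beta)
    if S.size:
        X_S = X[:, S]
        J[S] = np.linalg.solve(X_S.T @ X_S / n, -np.sign(beta[S]))
    return J
```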

Tags: arxiv, differentiation, learning, ml
