### Web: http://arxiv.org/abs/2006.08167

Jan. 28, 2022, 2:11 a.m. | Tesi Xiao, Krishnakumar Balasubramanian, Saeed Ghadimi

We analyze stochastic conditional gradient methods for constrained
optimization problems arising in over-parametrized machine learning. We show
that one can leverage the interpolation-like conditions satisfied by such
models to obtain improved oracle complexities. Specifically, when the objective
function is convex, we show that the conditional gradient method requires
$\mathcal{O}(\epsilon^{-2})$ calls to the stochastic gradient oracle to find an
$\epsilon$-optimal solution. Furthermore, by including a gradient sliding step,
we show that the number of calls reduces to $\mathcal{O}(\epsilon^{-1.5})$.
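To make the setting concrete, below is a minimal sketch of the stochastic conditional gradient (Frank-Wolfe) method on a toy interpolation problem. Everything here is illustrative and not taken from the paper: the function names (`stochastic_frank_wolfe`, `grad_oracle`), the simplex constraint set, the single-sample oracle, and the classic $2/(t+2)$ step size are my assumptions for a self-contained example. The data are built so that $b = Ax^\*$ with $x^\*$ feasible, i.e. every per-sample loss vanishes at the optimum, which is the interpolation-like condition the abstract refers to.

```python
import numpy as np

def stochastic_frank_wolfe(grad_oracle, x0, num_iters=2000, seed=0):
    """Stochastic conditional gradient sketch over the probability simplex.

    grad_oracle(x, rng) returns an unbiased stochastic gradient at x.
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(num_iters):
        g = grad_oracle(x, rng)
        # Linear minimization oracle: over the simplex the minimizer of
        # <g, s> is the vertex e_i with i = argmin_i g_i.
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (t + 2)          # standard Frank-Wolfe step size
        x = (1.0 - gamma) * x + gamma * s  # stays inside the simplex
    return x

# Toy interpolation instance: b = A @ x_star with x_star in the simplex,
# so the stochastic gradient noise vanishes at the optimum.
rng0 = np.random.default_rng(1)
A = rng0.normal(size=(100, 5))
x_star = np.array([0.3, 0.2, 0.1, 0.25, 0.15])
b = A @ x_star

def grad_oracle(x, rng):
    i = rng.integers(A.shape[0])       # sample one data point
    a = A[i]
    return a * (a @ x - b[i])          # gradient of 0.5 * (a @ x - b[i])**2

x_hat = stochastic_frank_wolfe(grad_oracle, np.ones(5) / 5)
```

Because each iterate is a convex combination of simplex vertices, feasibility is maintained without any projection step, which is the practical appeal of conditional gradient methods; the gradient-sliding variant mentioned above additionally reuses gradients across inner linear-minimization steps to reach the improved $\mathcal{O}(\epsilon^{-1.5})$ oracle complexity.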
