Universal Online Convex Optimization with Minimax Optimal Second-Order Dynamic Regret. (arXiv:1907.00497v2 [math.OC] UPDATED)
Web: http://arxiv.org/abs/1907.00497
Jan. 31, 2022, 2:11 a.m. | Hakan Gokcesu, Suleyman S. Kozat
cs.LG updates on arXiv.org
We introduce an online convex optimization algorithm that uses projected
subgradient descent with optimal adaptive learning rates and performs
efficient, sequential first-order updates. Our method provides a
subgradient-adaptive, minimax-optimal dynamic regret guarantee for a sequence
of general convex functions with no known additional properties such as strong
convexity, smoothness, exp-concavity, or even Lipschitz continuity. The
guarantee holds against any comparator decision sequence with bounded
"complexity", defined as the cumulative distance traveled between successive
decisions. We show optimality by generating …
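The abstract is truncated before the update rule is stated, so the following is only a rough illustration of the general setup it describes: online projected subgradient descent whose learning rate adapts to the observed subgradients, requiring no Lipschitz bound in advance. The AdaGrad-style scalar step size, the Euclidean-ball feasible set, and the linear-loss example data are assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def project_ball(x, radius):
    """Euclidean projection onto the ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def online_projected_subgradient(subgradient_oracles, dim, radius=1.0):
    """Sketch of online projected subgradient descent with an adaptive
    step size (NOT the paper's exact algorithm).

    `subgradient_oracles` is an iterable of callables g_t(x), each
    returning a subgradient of the round-t loss at the point x.
    Returns the sequence of decisions played.
    """
    x = np.zeros(dim)
    decisions = []
    grad_sq_sum = 0.0
    for g_t in subgradient_oracles:
        decisions.append(x.copy())       # play x_t, then observe feedback
        g = g_t(x)
        grad_sq_sum += float(np.dot(g, g))
        # Adaptive step size: eta_t ~ radius / sqrt(sum of squared
        # subgradient norms so far), so no Lipschitz constant is needed.
        eta = radius / (np.sqrt(grad_sq_sum) + 1e-12)
        x = project_ball(x - eta * g, radius)
    return decisions

# Usage example with hypothetical linear losses f_t(x) = <c_t, x>,
# whose subgradient at any x is simply c_t.
rng = np.random.default_rng(0)
oracles = [(lambda x, c=rng.normal(size=3): c) for _ in range(100)]
trajectory = online_projected_subgradient(oracles, dim=3, radius=1.0)
```

In this sketch the step size shrinks as cumulative subgradient mass grows, which is one standard way to get adaptivity to unknown gradient scales; the paper's guarantee concerns the stronger, comparator-dependent dynamic regret setting, which this toy loop does not attempt to reproduce.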