Improving Adaptive Online Learning Using Refined Discretization
Feb. 23, 2024, 5:43 a.m. | Zhiyu Zhang, Heng Yang, Ashok Cutkosky, Ioannis Ch. Paschalidis
cs.LG updates on arXiv.org
Abstract: We study unconstrained Online Linear Optimization with Lipschitz losses. Motivated by the pursuit of instance optimality, we propose a new algorithm that simultaneously achieves (i) AdaGrad-style second-order gradient adaptivity and (ii) comparator-norm adaptivity, also known as "parameter freeness" in the literature. In particular,
- our algorithm does not employ the impractical doubling trick and does not require an a priori estimate of the time-uniform Lipschitz constant;
- the associated regret bound …
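To make the two adaptivity notions concrete: gradient adaptivity means the regret scales with the gradients actually observed, roughly as the square root of the sum of squared gradient norms in the AdaGrad style, while parameter freeness means the bound also scales with the norm of the comparator without any tuned learning rate. The sketch below shows the classical coin-betting route to parameter freeness in one dimension (Krichevsky-Trofimov betting, as popularized by Orabona and Pal). It is background illustration only, not the paper's refined-discretization algorithm; the class name `KTBettor` and the simulated gradient stream are assumptions of this sketch.

```python
import numpy as np

class KTBettor:
    """Parameter-free unconstrained 1-D online linear optimization via
    Krichevsky-Trofimov coin betting. A generic textbook scheme shown only
    to illustrate "parameter freeness" -- NOT the paper's algorithm.
    Assumes |g_t| <= 1, i.e. a known Lipschitz bound of 1.
    """

    def __init__(self, initial_wealth: float = 1.0):
        self.wealth = initial_wealth  # bettor's cumulative wealth
        self.theta = 0.0              # running sum of -g_s over past rounds
        self.t = 0                    # round counter
        self.x = 0.0                  # last played point

    def predict(self) -> float:
        """Play x_t = beta_t * Wealth_{t-1} with the KT betting fraction."""
        self.t += 1
        beta = self.theta / self.t    # KT fraction, always in (-1, 1)
        self.x = beta * self.wealth
        return self.x

    def update(self, g: float) -> None:
        """Observe the linear-loss gradient g_t and settle the bet."""
        self.wealth -= g * self.x     # wealth grows when the bet was right
        self.theta -= g

# Usage sketch: adversarial linear losses <g_t, x_t> with g_t in [-1, 1].
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    learner = KTBettor()
    for _ in range(1000):
        x = learner.predict()
        g = rng.uniform(-1.0, 1.0)    # stand-in for an adversarial gradient
        learner.update(g)
    print(f"final iterate: {learner.x:.4f}, wealth: {learner.wealth:.4f}")
```

The point of the scheme is that no step size is ever tuned: the bettor's wealth automatically scales the iterates to the magnitude of the best comparator in hindsight, which is exactly what "parameter freeness" buys.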