Optimal Comparator Adaptive Online Learning with Switching Cost. (arXiv:2205.06846v3 [cs.LG] UPDATED)
Oct. 13, 2022, 1:13 a.m. | Zhiyu Zhang, Ashok Cutkosky, Ioannis Ch. Paschalidis
cs.LG updates on arXiv.org
Practical online learning tasks are often naturally defined on unconstrained domains, where optimal algorithms for general convex losses are characterized by the notion of comparator adaptivity. In this paper, we design such algorithms in the presence of switching cost: the latter penalizes the typical optimism in adaptive algorithms, leading to a delicate design trade-off. Based on a novel dual space scaling strategy discovered by a continuous-time analysis, we propose a simple algorithm that improves the existing comparator adaptive regret …
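The central object in this setting, regret augmented with a switching-cost penalty, can be made concrete with a minimal sketch. The code below runs plain online gradient descent on an unconstrained 1-D domain and tracks the movement penalty lam * sum_t |x_{t+1} - x_t| alongside the regret against a fixed comparator u. This is an illustration of the problem setup only, not the paper's comparator-adaptive algorithm; the names `ogd_with_switching_cost`, `lr`, and `lam` are illustrative choices.

```python
def ogd_with_switching_cost(grads, lr=0.1, lam=1.0):
    """Unconstrained 1-D online gradient descent.

    Returns the played iterates x_1, ..., x_T and the cumulative
    switching cost lam * sum_t |x_{t+1} - x_t|.
    """
    xs = []        # iterates actually played
    x = 0.0
    for g in grads:
        xs.append(x)   # play x_t, then observe gradient g_t
        x -= lr * g    # unconstrained gradient step
    switching = lam * sum(abs(b - a) for a, b in zip(xs, xs[1:]))
    return xs, switching


def linear_regret(grads, xs, u):
    """Regret of the played points against a fixed comparator u,
    for linear losses l_t(x) = g_t * x."""
    return sum(g * (x - u) for g, x in zip(grads, xs))
```

A comparator-adaptive algorithm aims to bound `linear_regret(..., u)` for every comparator u simultaneously, while the switching-cost term discourages the aggressive step-size changes that adaptive methods typically rely on.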