Optimal Parameter-free Online Learning with Switching Cost. (arXiv:2205.06846v2 [cs.LG] UPDATED)
May 23, 2022, 1:11 a.m. | Zhiyu Zhang, Ashok Cutkosky, Ioannis Ch. Paschalidis
cs.LG updates on arXiv.org
Parameter-freeness in online learning refers to the adaptivity of an
algorithm with respect to the optimal decision in hindsight. In this paper, we
design such algorithms in the presence of switching costs: the latter penalize
the optimistic updates required by parameter-freeness, leading to a delicate
design trade-off. Based on a novel dual-space scaling strategy, we propose a
simple yet powerful algorithm for Online Linear Optimization (OLO) with
switching cost, which improves the existing suboptimal regret bound [ZCP22a] to …
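To make the setting concrete, here is a minimal sketch of OLO with switching cost. This is NOT the paper's dual-space scaling algorithm; it runs plain online gradient descent as a baseline and simply tallies the two costs the problem charges per round: the linear loss ⟨g_t, x_t⟩ and the movement penalty λ·|x_t − x_{t−1}|. The learning rate, λ, and the alternating gradient sequence are illustrative assumptions.

```python
def olo_with_switching_cost(grads, lr=0.1, lam=1.0):
    """Run 1-D online gradient descent on a sequence of linear losses
    and accumulate both the linear losses <g_t, x_t> and the switching
    costs lam * |x_t - x_{t-1}| charged in this setting.

    Baseline OGD only -- not the paper's parameter-free algorithm."""
    x, prev = 0.0, 0.0
    linear_loss, switch_cost = 0.0, 0.0
    for g in grads:
        linear_loss += g * x              # loss of the played point x_t
        switch_cost += lam * abs(x - prev)  # penalty for moving from x_{t-1}
        prev = x
        x = x - lr * g                    # OGD update for the next round
    return linear_loss, switch_cost

# Adversarially alternating gradients force the learner to keep moving,
# so the switching cost dominates the linear loss here.
grads = [1.0, -1.0] * 50
loss, cost = olo_with_switching_cost(grads)
```

With these alternating gradients the switching cost grows every round, which illustrates the tension the abstract describes: updating aggressively (as parameter-free methods do) is exactly what the switching penalty punishes.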