May 23, 2022, 1:11 a.m. | Zhiyu Zhang, Ashok Cutkosky, Ioannis Ch. Paschalidis

cs.LG updates on arXiv.org

Parameter-freeness in online learning refers to the adaptivity of an
algorithm with respect to the optimal decision in hindsight. In this paper, we
design such algorithms in the presence of switching cost: the latter penalizes
the optimistic updates required by parameter-freeness, leading to a delicate
design trade-off. Based on a novel dual-space scaling strategy, we propose a
simple yet powerful algorithm for Online Linear Optimization (OLO) with
switching cost, which improves the existing suboptimal regret bound [ZCP22a] to …
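To make the setting concrete, the following is a minimal sketch of a *standard* parameter-free OLO baseline, a Krichevsky–Trofimov (KT) coin-betting bettor, on a one-dimensional instance, with the cumulative switching cost tracked alongside the linear loss. This is an illustration of the problem setup only, not the paper's dual-space-scaling algorithm; the weight `lam` and initial wealth `eps` are assumed parameters.

```python
def kt_olo_with_switching_cost(grads, lam=1.0, eps=1.0):
    """Run a KT coin-betting bettor on 1-D Online Linear Optimization
    and track both the linear loss and the total switching cost.

    Illustrative sketch only (not the paper's algorithm).
    grads: sequence of gradients g_t, assumed in [-1, 1].
    lam:   switching-cost weight lambda (assumption).
    eps:   initial wealth (assumption).
    """
    wealth = eps       # Wealth_{t-1}, starts at eps
    grad_sum = 0.0     # sum of -g_s for s < t
    w_prev = 0.0       # previous iterate, for the switching cost
    loss = switch = 0.0
    for t, g in enumerate(grads, start=1):
        w = (grad_sum / t) * wealth   # KT betting fraction times wealth
        loss += g * w                 # linear loss <g_t, w_t>
        switch += lam * abs(w - w_prev)
        wealth -= g * w               # Wealth_t = Wealth_{t-1} - g_t * w_t
        grad_sum += -g
        w_prev = w
    return loss, switch
```

On a constant gradient stream the bettor's iterates grow toward the comparator, and every such move is charged to the switching-cost term, which is exactly the tension with the optimistic (fast-moving) updates that parameter-freeness requires.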

