Feb. 15, 2024, 5:43 a.m. | Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang

cs.LG updates on arXiv.org

arXiv:2305.12131v2 Announce Type: replace
Abstract: Online convex optimization (OCO) with arbitrary delays, in which gradients or other information about the functions can be arbitrarily delayed, has received increasing attention recently. Unlike previous studies that focus on stationary environments, this paper investigates delayed OCO in non-stationary environments and aims to minimize the dynamic regret with respect to any sequence of comparators. To this end, we first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each …
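
To make the setting concrete, below is a minimal sketch of online gradient descent with arbitrarily delayed gradient feedback, loosely following the problem described in the abstract. It is not the paper's DOGD algorithm: the quadratic losses, unit-ball domain, fixed step size, and the way arrived gradients are processed are all illustrative assumptions, since the abstract is truncated before those details.

# Sketch: online gradient descent with arbitrarily delayed gradients.
# All concrete choices (losses, domain, step size) are assumptions for
# illustration only; the actual DOGD algorithm is defined in the paper.
import numpy as np

def project_unit_ball(x):
    """Euclidean projection onto the unit ball (assumed feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= 1.0 else x / norm

def delayed_ogd(feedback, dim, eta=0.1):
    """
    feedback: list of (round, delay, grad_fn); the gradient of the loss
    queried at round t (evaluated at that round's decision) only becomes
    available at round t + delay.
    Returns the sequence of decisions x_1, ..., x_T.
    """
    T = len(feedback)
    x = np.zeros(dim)
    decisions = []
    pending = []  # buffer of (arrival_round, gradient)
    for t in range(T):
        decisions.append(x.copy())
        _, delay, grad_fn = feedback[t]
        pending.append((t + delay, grad_fn(x)))
        # Apply one descent step per gradient that has arrived by round t,
        # processing them in arrival order.
        arrived = sorted((g for g in pending if g[0] <= t), key=lambda g: g[0])
        pending = [g for g in pending if g[0] > t]
        for _, g in arrived:
            x = project_unit_ball(x - eta * g)
    return decisions

# Toy usage: losses f_t(x) = ||x - u_t||^2 with a slowly drifting minimizer
# u_t (a stand-in for a non-stationary environment) and random delays.
rng = np.random.default_rng(0)
dim, T = 5, 200
feedback = []
for t in range(T):
    u_t = 0.5 * np.sin(np.ones(dim) * t / 20.0)      # slowly moving target
    delay = int(rng.integers(0, 10))                 # arbitrary delay
    feedback.append((t, delay, lambda x, u=u_t: 2 * (x - u)))
xs = delayed_ogd(feedback, dim)
print("final decision:", np.round(xs[-1], 3))

In this toy run the comparator sequence changes over time, which is exactly the regime where dynamic regret (rather than static regret against a single fixed comparator) is the meaningful performance measure.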

