A gradient estimator via L1-randomization for online zero-order optimization with two point feedback. (arXiv:2205.13910v1 [math.ST])
May 30, 2022, 1:11 a.m. | Arya Akhavan, Evgenii Chzhen, Massimiliano Pontil, Alexandre B. Tsybakov
stat.ML updates on arXiv.org arxiv.org
This work studies online zero-order optimization of convex and Lipschitz
functions. We present a novel gradient estimator based on two function
evaluations and randomization on the $\ell_1$-sphere. Considering different
geometries of feasible sets and Lipschitz assumptions, we analyse the online
mirror descent algorithm with our estimator in place of the usual gradient. We
consider two types of assumptions on the noise of the zero-order oracle:
canceling noise and adversarial noise. We provide an anytime and completely
data-driven algorithm, which is adaptive …
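A minimal sketch of the idea behind such a two-point estimator with $\ell_1$-sphere randomization: sample a direction $\zeta$ uniformly on the $\ell_1$-sphere, query the function at $x + h\zeta$ and $x - h\zeta$, and form a scaled finite difference. The specific form below, $g = \frac{d}{2h}\,(f(x + h\zeta) - f(x - h\zeta))\,\mathrm{sign}(\zeta)$, and the sampling scheme (exponential magnitudes with random signs, $\ell_1$-normalized) are illustrative assumptions, not taken verbatim from the paper.

```python
import numpy as np

def sample_l1_sphere(d, rng):
    """Draw a point uniformly on the l1-sphere {z : ||z||_1 = 1}.

    Exponential magnitudes with independent random signs, normalized
    by the l1-norm, give the uniform distribution on the l1-sphere.
    """
    magnitudes = rng.exponential(size=d)
    signs = rng.choice([-1.0, 1.0], size=d)
    z = signs * magnitudes
    return z / np.linalg.norm(z, 1)

def l1_two_point_gradient(f, x, h, rng):
    """Two-point zero-order gradient estimate with l1 randomization.

    Assumed form (illustrative): g = (d / 2h) * (f(x + h z) - f(x - h z)) * sign(z),
    where z is uniform on the l1-sphere. Uses exactly two function evaluations.
    """
    d = x.size
    z = sample_l1_sphere(d, rng)
    return (d / (2.0 * h)) * (f(x + h * z) - f(x - h * z)) * np.sign(z)

# Sanity check on a quadratic, whose true gradient at x is x itself:
# averaging many independent estimates should approach the true gradient.
rng = np.random.default_rng(0)
f = lambda v: 0.5 * np.dot(v, v)
x = np.array([1.0, -2.0, 3.0])
g = np.mean([l1_two_point_gradient(f, x, 1e-4, rng) for _ in range(20000)], axis=0)
```

In an online mirror descent loop, `g` would simply replace the true gradient at each round; the noise models discussed in the abstract (canceling vs. adversarial) would enter through the oracle `f`.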