March 20, 2024, 4:43 a.m. | Dewei Zhang, Sam Davanloo Tajbakhsh

cs.LG updates on arXiv.org

arXiv:2207.09350v2 Announce Type: replace-cross
Abstract: This work considers optimization of composition of functions in a nested form over Riemannian manifolds where each function contains an expectation. This type of problems is gaining popularity in applications such as policy evaluation in reinforcement learning or model customization in meta-learning. The standard Riemannian stochastic gradient methods for non-compositional optimization cannot be directly applied as stochastic approximation of inner functions create bias in the gradients of the outer functions. For two-level composition optimization, we …

