Towards Extremely Fast Bilevel Optimization with Self-governed Convergence Guarantees. (arXiv:2205.10054v1 [math.OC])
May 23, 2022, 1:10 a.m. | Risheng Liu, Xuan Liu, Wei Yao, Shangzhi Zeng, Jin Zhang
cs.LG updates on arXiv.org arxiv.org
Gradient methods have become mainstream techniques for Bi-Level Optimization
(BLO) in learning and vision fields. The validity of existing works heavily
relies on solving a series of approximation subproblems with extraordinarily
high accuracy. Unfortunately, achieving this approximation accuracy requires
executing a large number of time-consuming iterations, which naturally incurs
a heavy computational burden. This paper is therefore devoted to addressing
this critical computational issue. In particular, we propose a single-level
formulation to uniformly understand existing explicit and implicit
Gradient-based BLOs …
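The computational issue the abstract describes can be seen in a minimal sketch of gradient-based BLO via unrolled inner iterations. The toy problem below (a quadratic lower level with a known closed-form solution) is an illustrative assumption, not the paper's actual formulation: the hypergradient estimate only becomes accurate after many inner gradient steps, which is exactly the cost the paper aims to reduce.

```python
import numpy as np

# Toy bilevel problem (illustrative assumption, not the paper's setup):
#   lower level: y*(x) = argmin_y 0.5 * (y - x)^2   (so y*(x) = x)
#   upper level: F(x)  = 0.5 * (y*(x) - a)^2
# Exact hypergradient: dF/dx = x - a.

a = 3.0

def hypergradient_unrolled(x, T, lr=0.1):
    """Approximate dF/dx by unrolling T inner gradient-descent steps."""
    y, dy_dx = 0.0, 0.0                      # inner iterate and its sensitivity to x
    for _ in range(T):
        y = y - lr * (y - x)                 # inner GD step on the lower-level objective
        dy_dx = dy_dx - lr * (dy_dx - 1.0)   # differentiate that step w.r.t. x
    return (y - a) * dy_dx                   # chain rule: dF/dx ~ (y_T - a) * dy_T/dx

x = 1.0
exact = x - a  # = -2.0
for T in (5, 50, 500):
    approx = hypergradient_unrolled(x, T)
    print(f"T={T:3d}  approx={approx:+.4f}  error={abs(approx - exact):.4f}")
```

With only 5 inner steps the hypergradient estimate is far from the true value; hundreds of steps are needed before the error becomes negligible, which is the iteration cost motivating the paper's single-level reformulation.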