When AUC meets DRO: Optimizing Partial AUC for Deep Learning with Non-Convex Convergence Guarantee. (arXiv:2203.00176v2 [cs.LG] UPDATED)
March 4, 2022, 2:12 a.m. | Dixian Zhu, Gang Li, Bokun Wang, Xiaodong Wu, Tianbao Yang
cs.LG updates on arXiv.org
In this paper, we propose systematic and efficient gradient-based methods for
both one-way and two-way partial AUC (pAUC) maximization that are applicable to
deep learning. We propose new formulations of pAUC surrogate objectives, using
distributionally robust optimization (DRO) to define the loss for each
individual positive example. We consider two formulations of DRO: one based on
conditional value at risk (CVaR), which yields a non-smooth but exact estimator
for pAUC, and another based on a KL …
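The CVaR-based DRO loss described above can be illustrated with a small sketch: for each positive example, the CVaR of the pairwise losses against the negatives reduces to the average of the top-k largest losses. The squared-hinge surrogate, margin, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def cvar_pauc_loss(pos_scores, neg_scores, k, margin=1.0):
    """Sketch of a one-way partial-AUC surrogate via a CVaR-style DRO loss.

    For each positive score, compute a pairwise surrogate loss against every
    negative score (here a squared hinge, an illustrative choice), then take
    the CVaR at level k/len(neg_scores): the mean of the k largest losses.
    Averaging over positives gives the overall objective.
    """
    per_positive = []
    for s_pos in pos_scores:
        # pairwise squared-hinge loss against each negative score
        pair_losses = np.maximum(0.0, margin - (s_pos - neg_scores)) ** 2
        # CVaR reduces to the mean of the top-k pairwise losses
        top_k = np.sort(pair_losses)[-k:]
        per_positive.append(top_k.mean())
    return float(np.mean(per_positive))
```

Focusing on the top-k hardest negatives per positive is what restricts the objective to a partial range of false-positive rates; with k equal to the number of negatives, the loss recovers an ordinary full-AUC surrogate.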