Large-scale Stochastic Optimization of NDCG Surrogates for Deep Learning with Provable Convergence. (arXiv:2202.12183v3 [cs.LG] UPDATED)
cs.LG updates on arXiv.org
NDCG (Normalized Discounted Cumulative Gain) is a widely used ranking
metric in information retrieval and machine learning. However, efficient and
provable stochastic methods for maximizing NDCG are still lacking, especially
for deep models. In this paper, we propose a principled approach to optimize
NDCG and its top-$K$ variant. First, we formulate a novel compositional
optimization problem for optimizing the NDCG surrogate, and a novel bilevel
compositional optimization problem for optimizing the top-$K$ NDCG surrogate.
Then, we develop efficient stochastic …
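For context, a minimal sketch of the metric itself may help. The standard exponential-gain formulation is $\mathrm{DCG}@k = \sum_{i=1}^{k} (2^{r_i}-1)/\log_2(i+1)$, normalized by the ideal DCG so that $\mathrm{NDCG}@k \in [0,1]$. The snippet below computes plain NDCG@$k$ for graded relevance labels; it illustrates only the metric the paper builds surrogates for, not the paper's compositional optimization method, and the function names are illustrative.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted Cumulative Gain over the top-k positions."""
    rel = np.asarray(relevances, dtype=float)[:k]
    # Discount for position i (1-indexed) is log2(i + 1).
    discounts = np.log2(np.arange(2, rel.size + 2))
    return np.sum((2.0 ** rel - 1.0) / discounts)

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the given ranking divided by the ideal DCG."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    if ideal_dcg == 0.0:
        return 0.0  # no relevant items at all
    return dcg_at_k(relevances, k) / ideal_dcg

# Example: graded relevance labels listed in predicted rank order.
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=6))  # ~0.95
```

NDCG is non-differentiable in the model scores (it depends on ranks, not values), which is why optimizing it for deep models requires the smooth surrogates the abstract describes.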