Web: http://arxiv.org/abs/2202.12183

June 23, 2022, 1:11 a.m. | Zi-Hao Qiu, Quanqi Hu, Yongjian Zhong, Lijun Zhang, Tianbao Yang

cs.LG updates on arXiv.org

NDCG, namely Normalized Discounted Cumulative Gain, is a widely used ranking
metric in information retrieval and machine learning. However, efficient and
provable stochastic methods for maximizing NDCG are still lacking, especially
for deep models. In this paper, we propose a principled approach to optimize
NDCG and its top-$K$ variant. First, we formulate a novel compositional
optimization problem for optimizing the NDCG surrogate, and a novel bilevel
compositional optimization problem for optimizing the top-$K$ NDCG surrogate.
Then, we develop efficient stochastic …
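For reference, below is a minimal sketch of the NDCG@K metric itself (not the paper's surrogate objective or its stochastic optimizer), using the standard exponential-gain / logarithmic-discount formulation; the function names and the toy example are illustrative only.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k positions."""
    rel = np.asarray(relevances, dtype=float)[:k]
    gains = 2.0 ** rel - 1.0                          # exponential gain 2^rel - 1
    discounts = np.log2(np.arange(2, rel.size + 2))   # position i discounted by log2(i + 1)
    return float(np.sum(gains / discounts))

def ndcg_at_k(scores, relevances, k):
    """NDCG@k: DCG of items ranked by predicted score, normalized by the ideal DCG."""
    order = np.argsort(scores)[::-1]                  # rank items by descending predicted score
    ranked_rel = np.asarray(relevances)[order]
    ideal_rel = np.sort(relevances)[::-1]             # best achievable ordering of the labels
    ideal = dcg_at_k(ideal_rel, k)
    return dcg_at_k(ranked_rel, k) / ideal if ideal > 0 else 0.0

# Toy example: three items for one query with graded relevance labels.
print(ndcg_at_k(scores=[0.9, 0.2, 0.5], relevances=[3, 0, 1], k=3))
```

Because the ranking induced by `argsort` is non-differentiable, maximizing this quantity directly with gradient methods is what motivates the smooth surrogate and compositional formulations described in the abstract.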

