June 23, 2022, 1:12 a.m. | Zi-Hao Qiu, Quanqi Hu, Yongjian Zhong, Lijun Zhang, Tianbao Yang

stat.ML updates on arXiv.org

NDCG, short for Normalized Discounted Cumulative Gain, is a widely used ranking
metric in information retrieval and machine learning. However, efficient and
provable stochastic methods for maximizing NDCG are still lacking, especially
for deep models. In this paper, we propose a principled approach to optimizing
NDCG and its top-$K$ variant. First, we formulate a novel compositional
optimization problem for optimizing the NDCG surrogate, and a novel bilevel
compositional optimization problem for optimizing the top-$K$ NDCG surrogate.
Then, we develop efficient stochastic …
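
For context, since the abstract is truncated before defining the metric: NDCG@K is the discounted cumulative gain of the top-K items in the predicted ranking, normalized by the DCG of the ideal (relevance-sorted) ranking. Below is a minimal NumPy sketch of the metric itself; the function names and the exponential-gain convention are illustrative assumptions, not code from the paper.

```python
import numpy as np

def dcg_at_k(relevances, k):
    """Discounted cumulative gain of the top-k items, in ranked order."""
    rel = np.asarray(relevances, dtype=float)[:k]
    # Exponential gain 2^rel - 1, discounted by log2(rank + 1), ranks 1-based.
    discounts = np.log2(np.arange(2, rel.size + 2))
    return float(np.sum((2.0 ** rel - 1.0) / discounts))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the predicted order, normalized by the ideal DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Relevance labels listed in the model's predicted rank order.
print(ndcg_at_k([3, 2, 3, 0, 1, 2], k=5))  # ≈ 0.88
```

Note that the hard sort implicit in the ranks makes NDCG non-differentiable in the model scores, which is why the paper optimizes a smooth surrogate rather than the metric directly.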

