April 8, 2024, 4:42 a.m. | Zechun Niu, Jiaxin Mao, Qingyao Ai, Ji-Rong Wen

cs.LG updates on arXiv.org

arXiv:2404.03707v1 Announce Type: new
Abstract: Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models. While CLTR models can be theoretically unbiased when the user behavior assumptions are correct and the propensity estimation is accurate, their effectiveness is usually evaluated empirically via simulation-based experiments, due to the lack of widely available, large-scale, real click logs. However, the mainstream simulation-based experiments are somewhat limited …
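To make the propensity-based debiasing concrete, here is a minimal sketch of the inverse-propensity-scoring (IPS) idea that underlies CLTR, assuming a simple position-based click model; the examination propensities, relevance values, and simulation setup below are hypothetical illustrations, not the paper's code or data.

```python
"""Minimal IPS sketch under a hypothetical position-based click model:
P(click | doc at rank k) = exam_prob[k] * relevance. Dividing each click
by the examination propensity of its displayed rank gives an unbiased
estimate of relevance from position-biased click logs."""
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical examination propensities per rank (top rank seen most often).
exam_prob = np.array([1.0, 0.6, 0.3, 0.1])
# Hypothetical true relevance of the 4 documents shown at those ranks.
relevance = np.array([0.2, 0.9, 0.5, 0.8])

n_sessions = 100_000
# Simulate clicks: a document is clicked only if examined and relevant.
examined = rng.random((n_sessions, 4)) < exam_prob
clicked = examined & (rng.random((n_sessions, 4)) < relevance)

naive_estimate = clicked.mean(axis=0)               # biased toward top ranks
ips_estimate = (clicked / exam_prob).mean(axis=0)   # propensity-weighted, unbiased

print("true relevance :", relevance)
print("naive (biased) :", naive_estimate.round(3))
print("IPS (debiased) :", ips_estimate.round(3))
```

Because the click probability factorizes as examination times relevance, the propensity-weighted click has the true relevance as its expectation, which is why CLTR is unbiased only when the behavior assumption holds and the propensities are estimated accurately.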

