Investigating the Robustness of Counterfactual Learning to Rank Models: A Reproducibility Study
April 8, 2024, 4:42 a.m. | Zechun Niu, Jiaxin Mao, Qingyao Ai, Ji-Rong Wen
cs.LG updates on arXiv.org arxiv.org
Abstract: Counterfactual learning to rank (CLTR) has attracted extensive attention in the IR community for its ability to leverage massive logged user interaction data to train ranking models. While the CLTR models can be theoretically unbiased when the user behavior assumption is correct and the propensity estimation is accurate, their effectiveness is usually empirically evaluated via simulation-based experiments due to a lack of widely-available, large-scale, real click logs. However, the mainstream simulation-based experiments are somewhat limited …
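The debiasing idea the abstract refers to is commonly realized with inverse propensity scoring (IPS): each logged click is reweighted by the inverse of the estimated probability that its position was examined, so that position bias cancels in expectation. As a minimal sketch (the function name, toy numbers, and pointwise logistic loss are illustrative assumptions, not the paper's specific setup):

```python
import numpy as np

def ips_loss(scores, clicks, propensities):
    """Inverse-propensity-scored pointwise loss on logged click data.

    Each click is reweighted by 1/propensity (the estimated probability
    that its position was examined). Under the position-based examination
    model with accurate propensities, this debiases the naive click-based
    objective in expectation.
    """
    weights = clicks / propensities  # non-clicked items contribute zero
    # Pointwise logistic loss on the ranking scores, reweighted per item.
    return np.mean(weights * np.log1p(np.exp(-scores)))

# Toy example: three logged items with position-dependent propensities.
scores = np.array([2.0, 0.5, -1.0])       # current model scores
clicks = np.array([1.0, 0.0, 1.0])        # observed clicks
propensities = np.array([0.9, 0.5, 0.2])  # estimated examination probabilities
loss = ips_loss(scores, clicks, propensities)
```

Note how the click at the low-propensity third position receives a large weight (1/0.2 = 5): this is exactly where the "accurate propensity estimation" assumption the abstract mentions becomes critical, since small propensity errors at rarely examined positions are amplified.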