DaisyRec 2.0: Benchmarking Recommendation for Rigorous Evaluation. (arXiv:2206.10848v1 [cs.IR])
Web: http://arxiv.org/abs/2206.10848
June 23, 2022, 1:10 a.m. | Zhu Sun, Hui Fang, Jie Yang, Xinghua Qu, Hongyang Liu, Di Yu, Yew-Soon Ong, Jie Zhang
cs.LG updates on arXiv.org arxiv.org
Recently, a critical issue has loomed large in the field of recommender systems
-- the lack of effective benchmarks for rigorous evaluation -- which
leads to unreproducible evaluations and unfair comparisons. We therefore
conduct studies from the perspectives of both theory and experiments,
aiming to benchmark recommendation for rigorous evaluation.
Regarding the theoretical study, a series of hyper-factors affecting
recommendation performance throughout the whole evaluation chain are
systematically summarized and analyzed via an exhaustive review of 141 papers
published at …
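To make the evaluation theme concrete, here is a minimal sketch of two ranking metrics (hit rate and NDCG at a cutoff K) that are standard in recommendation benchmarking under a leave-one-out protocol. This is an illustrative example of such metrics in general, not code or definitions taken from the DaisyRec 2.0 paper itself.

```python
import math

def hit_rate_at_k(ranked_items, target, k):
    # 1 if the held-out target item appears in the top-k list, else 0
    return int(target in ranked_items[:k])

def ndcg_at_k(ranked_items, target, k):
    # Single held-out target: ideal DCG is 1/log2(2) = 1,
    # so NDCG reduces to the discounted gain at the target's rank.
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

# Toy ranking produced by a model for one user; "a" is the held-out item.
ranked = ["b", "a", "d", "c"]
print(hit_rate_at_k(ranked, "a", 3))  # target is within the top-3
print(ndcg_at_k(ranked, "a", 3))      # gain discounted by rank 2
```

Even with fixed metric definitions like these, choices elsewhere in the evaluation chain (data split, negative sampling, cutoff K) can shift reported numbers, which is the kind of hyper-factor the paper analyzes.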
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY