Evaluation Gaps in Machine Learning Practice. (arXiv:2205.05256v1 [cs.LG])
May 12, 2022, 1:11 a.m. | Ben Hutchinson, Negar Rostamzadeh, Christina Greer, Katherine Heller, Vinodkumar Prabhakaran
cs.LG updates on arXiv.org arxiv.org
Forming a reliable judgement of a machine learning (ML) model's appropriateness for an application ecosystem is critical for its responsible use, and requires considering a broad range of factors including harms, benefits, and responsibilities. In practice, however, evaluations of ML models frequently focus on only a narrow range of decontextualized predictive behaviours. We examine the evaluation gaps between the idealized breadth of evaluation concerns and the observed narrow focus of actual evaluations. Through an empirical study of papers from recent …