Nov. 8, 2022, 2:11 a.m. | Patrick Kaiser, Christoph Kern, David Rügamer

cs.LG updates on arXiv.org arxiv.org

Both industry and academia have made considerable progress in developing
trustworthy and responsible machine learning (ML) systems. While critical
concepts such as fairness and explainability are often addressed, system
safety is typically not sufficiently taken into account. By viewing
data-driven decision systems as socio-technical systems, we draw on the ML
literature on uncertainty to show how fairML systems can also be safeML
systems. We posit that a fair model needs to be an uncertainty-aware model,
e.g. by drawing on …
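The idea of an uncertainty-aware model can be illustrated with a minimal sketch (not taken from the paper): a classifier that abstains from a prediction when its confidence is low, deferring those cases to a human reviewer instead of issuing a potentially unfair or unsafe automated decision. The function name and threshold below are illustrative assumptions.

```python
import numpy as np

def predict_with_abstention(probs, threshold=0.8):
    """Return class predictions, abstaining (-1) whenever the model's
    top-class probability falls below the confidence threshold.

    probs: array-like of shape (n_samples, n_classes) with class
    probabilities, e.g. from a calibrated classifier.
    """
    probs = np.asarray(probs)
    top_prob = probs.max(axis=1)          # confidence of the predicted class
    preds = probs.argmax(axis=1)          # most likely class per sample
    preds[top_prob < threshold] = -1      # abstain / defer to a human
    return preds

# Example: only the first sample is confident enough to decide automatically.
probs = [[0.95, 0.05], [0.55, 0.45], [0.30, 0.70]]
print(predict_with_abstention(probs))  # [ 0 -1 -1]
```

Abstention of this kind is one simple way a decision system can surface its own uncertainty rather than silently committing to a prediction it cannot support.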

