Jan. 31, 2024, 4:45 p.m. | David Almog, Romain Gauriot, Lionel Page, Daniel Martin

cs.LG updates on arXiv.org

Powered by the increasing predictive capabilities of machine learning
algorithms, artificial intelligence (AI) systems have begun to be used to
overrule human mistakes in many settings. We provide the first field evidence that
this AI oversight carries psychological costs that can impact human
decision-making. We investigate one of the highest visibility settings in which
AI oversight has occurred: the Hawk-Eye review of umpires in top tennis
tournaments. We find that umpires lowered their overall mistake rate after the
introduction of Hawk-Eye …
