June 6, 2022, 1:11 a.m. | Michael T. Lash

stat.ML updates on arXiv.org arxiv.org

The use of machine learning (ML) models in decision-making contexts,
particularly in high-stakes decision-making, is fraught with issues and
peril, since a person - not a machine - must ultimately be held accountable
for the consequences of decisions made using such systems. Machine learning
explainability (MLX) promises to provide decision-makers with
prediction-specific rationale, assuring them that model-elicited
predictions are made for the right reasons and are thus reliable. Few works
explicitly consider this key human-in-the-loop (HITL) …
