April 23, 2024, 8:04 p.m.

News on Artificial Intelligence and Machine Learning (techxplore.com)

We use computers to help us make (hopefully) unbiased decisions. The problem is that machine-learning algorithms do not always classify fairly when human bias is embedded in the data used to train them, which is often the case in practice.
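To make the point concrete, here is a minimal synthetic sketch (not from the article) of how bias in historical labels propagates into a model. The data, the skill/approval setup, and the naive threshold rule are all hypothetical: skill is identically distributed in both groups, but the historical labels approve group "B" at half the rate of group "A", and a rule fit to those labels reproduces the disparity.

```python
import random

random.seed(0)

def make_biased_history(n=10000):
    """Hypothetical synthetic records: (group, skill, approved).

    Skill is uniform in both groups, but historical approvals encode
    human bias: a group-B applicant is approved at half the rate of an
    equally skilled group-A applicant.
    """
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()
        approve_prob = skill if group == "A" else skill * 0.5  # biased labels
        label = 1 if random.random() < approve_prob else 0
        data.append((group, skill, label))
    return data

def fit_thresholds(data):
    """A naive per-group decision rule fit to the biased labels.

    For each group, place the approval threshold midway between the mean
    skill of historically approved and historically rejected applicants.
    """
    thresholds = {}
    for g in ("A", "B"):
        approved = [s for grp, s, y in data if grp == g and y == 1]
        rejected = [s for grp, s, y in data if grp == g and y == 0]
        mid = (sum(approved) / len(approved) + sum(rejected) / len(rejected)) / 2
        thresholds[g] = mid
    return thresholds

def selection_rate(data, thresholds, group):
    """Fraction of a group the learned rule would approve."""
    picks = [s >= thresholds[group] for g, s, _ in data if g == group]
    return sum(picks) / len(picks)

history = make_biased_history()
thr = fit_thresholds(history)
rate_a = selection_rate(history, thr, "A")
rate_b = selection_rate(history, thr, "B")

# Even though skill is identically distributed in both groups, the rule
# fit to biased labels selects group A at a higher rate than group B.
print(f"selection rate A: {rate_a:.2f}, B: {rate_b:.2f}")
```

The disparity appears because the learned thresholds are fit to the biased outcomes rather than to any ground truth about skill: the model faithfully reproduces the pattern in its training data, which is exactly the failure mode the article describes.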

