June 16, 2024, 7:27 a.m. | /u/alexsht1

Machine Learning | www.reddit.com

Some time ago I read a paper about so-called *tilted empirical risk minimization* (TERM), and later a JMLR paper by the same authors: [https://www.jmlr.org/papers/v24/21-1095.html](https://www.jmlr.org/papers/v24/21-1095.html)

Such a formulation lets us train in a manner that is more 'fair' towards the difficult samples, or conversely, less sensitive to them if they are actually outliers. But minimizing it is numerically challenging, since it involves exponentials of the losses, which can easily overflow. So I decided to try and devise a remedy in a blog post. I think it's an interesting trick …
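For context, the objective from the paper is

R̃(t; θ) = (1/t) · log( (1/N) · Σᵢ exp(t · ℓᵢ(θ)) ),

where t > 0 emphasizes the difficult samples and t < 0 suppresses them. Below is a minimal sketch of evaluating this objective stably in PyTorch — just the standard log-sum-exp stabilization, not the remedy from the post; the `tilted_risk` name and the toy losses are illustrative:

```python
# Tilted empirical risk, evaluated via logsumexp so that exp(t * loss)
# cannot overflow for large |t| or large per-sample losses.
import math
import torch

def tilted_risk(losses: torch.Tensor, t: float) -> torch.Tensor:
    """(1/t) * log( (1/N) * sum_i exp(t * losses[i]) ) for a 1-D tensor."""
    # logsumexp(t * losses) - log(N) == log(mean(exp(t * losses))), stably
    return (torch.logsumexp(t * losses, dim=0) - math.log(losses.numel())) / t

losses = torch.tensor([0.1, 0.2, 5.0])   # one 'outlier' sample
print(tilted_risk(losses, t=-2.0))       # pulled towards the small losses
print(tilted_risk(losses, t=+2.0))       # dominated by the hard sample
print(losses.mean())                     # t -> 0 recovers the plain average
```

As t → 0 the tilted risk reduces to ordinary ERM, so t acts as a single knob interpolating between average-case, robust (t < 0), and worst-case-leaning (t > 0) training.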
