April 9, 2024, 4:41 a.m. | Yi Zhang, Jitao Sang

cs.LG updates on arXiv.org arxiv.org

arXiv:2404.04814v1 Announce Type: new
Abstract: Fairness is critical for artificial intelligence systems, especially those deployed in high-stakes applications such as hiring and justice. Existing efforts toward fairness in machine learning require retraining or fine-tuning the neural network weights to meet the fairness criteria. However, this is often infeasible in practice for regular model users, who cannot access or modify model weights. In this paper, we propose a more flexible fairness paradigm, Inference-Time Rule Eraser, …
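The abstract above describes a paradigm that enforces fairness at inference time without touching model weights. The paper's actual mechanism is not shown here, but the general idea of weight-free, post-hoc debiasing can be sketched as a post-processing step on model outputs. Everything below is a hypothetical illustration: `debias_logits`, the offset values, and the toy scores are assumptions for the sketch, not the paper's method.

```python
import numpy as np

def debias_logits(logits, group_ids, group_offsets):
    """Subtract a per-group correction from model logits at inference time.

    The underlying model's weights are never accessed or modified;
    only its outputs are adjusted after the fact.
    """
    logits = np.asarray(logits, dtype=float)
    offsets = np.array([group_offsets[g] for g in group_ids], dtype=float)
    return logits - offsets

# Toy example: a hiring classifier whose raw scores are systematically
# shifted upward for group 0; a hypothetical inference-time offset
# removes that shift without retraining.
raw_scores = [2.0, 1.5, 0.4, 0.9]   # model logits (assumed values)
groups = [0, 0, 1, 1]               # protected-group membership
offsets = {0: 0.6, 1: 0.0}          # hypothetical correction terms

adjusted = debias_logits(raw_scores, groups, offsets)
```

The design point this sketch captures is the one the abstract argues for: a regular user who can only observe model outputs can still intervene on fairness, because the correction lives entirely outside the network.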

