APPLE: Adversarial Privacy-aware Perturbations on Latent Embedding for Unfairness Mitigation
March 11, 2024, 4:45 a.m. | Zikang Xu, Fenghe Tang, Quan Quan, Qingsong Yao, S. Kevin Zhou
cs.CV updates on arXiv.org
Abstract: Ensuring fairness in deep-learning-based segmentors is crucial for health equity. Much effort has been dedicated to mitigating unfairness in the training datasets or procedures. However, with the increasing prevalence of foundation models in medical image analysis, it is hard to train fair models from scratch while preserving utility. In this paper, we propose a novel method, Adversarial Privacy-aware Perturbations on Latent Embedding (APPLE), that can improve the fairness of deployed segmentors by introducing a small …
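The abstract's core idea is to improve the fairness of an already-deployed (frozen) segmentor by perturbing its latent embeddings so that sensitive attributes become harder to recover from them. Below is a minimal NumPy sketch of that general adversarial-perturbation idea — not the authors' implementation. The linear sensitive-attribute adversary, the FGSM-style sign step, and every name and parameter here are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(p, y):
    # binary cross-entropy of predicted probabilities p against labels y
    eps = 1e-12
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def privacy_perturbation(z, y_sensitive, w, b, epsilon=0.5):
    """FGSM-style perturbation of latent embeddings z that *increases*
    the loss of a linear sensitive-attribute adversary (w, b), i.e. pushes
    sensitive information out of the embedding. Hypothetical sketch."""
    p = sigmoid(z @ w + b)
    # per-sample gradient of BCE w.r.t. z for a linear adversary: (p - y) * w
    grad_z = np.outer(p - y_sensitive, w)
    return z + epsilon * np.sign(grad_z)

rng = np.random.default_rng(0)
# toy "latent embeddings": the sensitive attribute y is linearly
# encoded along a direction w_true, mimicking a leaky encoder
w_true = rng.normal(size=16)
y = rng.integers(0, 2, size=128).astype(float)
z = rng.normal(size=(128, 16)) + np.outer(y - 0.5, w_true)

loss_before = bce(sigmoid(z @ w_true), y)       # adversary succeeds on raw z
z_pert = privacy_perturbation(z, y, w_true, 0.0)
loss_after = bce(sigmoid(z_pert @ w_true), y)   # adversary degraded on z + delta
```

In this toy setup the adversary's loss is guaranteed to rise (the BCE is convex along the ascent direction), illustrating the mechanism; the paper's actual method would additionally constrain the perturbation so downstream segmentation utility is preserved.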