Feb. 9, 2024, 5:42 a.m. | Pranav Kulkarni, Andrew Chan, Nithya Navarathna, Skylar Chan, Paul H. Yi, Vishwa S. Parekh

cs.LG updates on arXiv.org

The proliferation of artificial intelligence (AI) in radiology has shed light on the risk of deep learning (DL) models exacerbating clinical biases against vulnerable patient populations. While prior literature has focused on quantifying biases exhibited by trained DL models, demographically targeted adversarial bias attacks on DL models and their implications in the clinical environment remain an underexplored field of research in medical imaging. In this work, we demonstrate that demographically targeted label poisoning attacks can introduce adversarial underdiagnosis bias in …
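To make the attack described above concrete, here is a minimal sketch of demographically targeted label poisoning: for a chosen demographic subgroup, a fraction of positive diagnosis labels is flipped to negative before training, so a model fit on the poisoned data learns to underdiagnose that subgroup. The function name `poison_labels`, the record schema, and the toy dataset are illustrative assumptions, not the paper's actual implementation.

```python
import random

def poison_labels(records, target_group, flip_rate, seed=0):
    """Flip positive labels to negative for the targeted demographic
    subgroup, simulating an adversarial underdiagnosis bias attack.
    (Hypothetical helper for illustration; not from the paper.)"""
    rng = random.Random(seed)
    poisoned = []
    for rec in records:
        rec = dict(rec)  # copy so the original dataset is untouched
        if (rec["group"] == target_group
                and rec["label"] == 1
                and rng.random() < flip_rate):
            rec["label"] = 0  # positive finding relabeled as negative
        poisoned.append(rec)
    return poisoned

# Toy dataset: each record carries a demographic group and a binary label.
data = [{"group": g, "label": l}
        for g in ("A", "B") for l in (1, 1, 1, 1, 0)]

# Flip every positive label in group "A"; group "B" is left intact.
attacked = poison_labels(data, target_group="A", flip_rate=1.0)
positives = {g: sum(r["label"] for r in attacked if r["group"] == g)
             for g in ("A", "B")}
```

After the attack, group "A" contributes no positive examples while group "B" is unchanged, which is exactly the kind of demographically skewed training signal that induces underdiagnosis bias in a downstream classifier.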

