April 12, 2024, 4:45 a.m. | Nadieh Khalili, Joey Spronck, Francesco Ciompi, Jeroen van der Laak, Geert Litjens

cs.CV updates on arXiv.org

arXiv:2404.07208v1 Announce Type: new
Abstract: Deep learning algorithms, often critiqued for their 'black box' nature, traditionally fall short in providing the necessary transparency for trusted clinical use. This challenge is particularly evident when such models are deployed in local hospitals, encountering out-of-domain distributions due to varying imaging techniques and patient-specific pathologies. Yet, this limitation offers a unique avenue for continual learning. The Uncertainty-Guided Annotation (UGA) framework introduces a human-in-the-loop approach, enabling AI to convey its uncertainties to clinicians, effectively acting …

