June 11, 2024, 4:47 a.m. | Yunhe Gao, Difei Gu, Mu Zhou, Dimitris Metaxas

cs.LG updates on arXiv.org

arXiv:2406.05596v1 Announce Type: cross
Abstract: Although explainability is essential in clinical diagnosis, most deep learning models still function as black boxes without elucidating their decision-making process. In this study, we investigate explainable model development that can mimic the decision-making process of human experts by fusing the domain knowledge of explicit diagnostic criteria. We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis. Explicd initiates its process by querying domain knowledge from either large language models …
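As a rough illustration of the step the abstract mentions, querying domain knowledge from a large language model to obtain explicit diagnostic criteria, here is a minimal sketch. This is not the authors' code; the class names and the `query_llm` helper are hypothetical placeholders for whatever chat-completion client is used.

```python
# Minimal sketch (not the Explicd implementation): asking an LLM for explicit,
# human-readable diagnostic criteria that could later anchor visual concepts.

def build_criteria_prompt(disease_classes):
    """Format a prompt requesting visual diagnostic criteria for the given classes."""
    class_list = ", ".join(disease_classes)
    return (
        "List the key visual diagnostic criteria (e.g., color, shape, border, texture) "
        f"that clinicians use to distinguish the following conditions: {class_list}. "
        "Return one short criterion per line."
    )

def query_llm(prompt):
    """Hypothetical stand-in: replace with a real chat-completion API call."""
    raise NotImplementedError("Connect to an LLM API of your choice here.")

if __name__ == "__main__":
    # Example disease classes are illustrative only.
    prompt = build_criteria_prompt(["melanoma", "basal cell carcinoma", "benign nevus"])
    print(prompt)
```

In a criteria-based pipeline of this kind, the returned text criteria would then be matched against image features so that the final classification can be traced back to human-interpretable concepts.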

