Jan. 3, 2022, 2:10 a.m. | Juan Manuel Mayor-Torres, Sara Medina-DeVilliers, Tessa Clarkson, Matthew D. Lerner, Giuseppe Riccardi

cs.LG updates on arXiv.org

Current Explainable Artificial Intelligence (XAI) methods have shown an evident, quantified lack of reliability in measuring feature relevance when statistically entangled features are used to train deep classifiers. At the same time, Deep Learning is increasingly applied in clinical trials to predict early diagnoses of neuro-developmental disorders such as Autism Spectrum Disorder (ASD). However, the inclusion of more reliable saliency maps, needed to obtain more trustworthy and interpretable metrics from neural activity features, is still insufficiently mature for practical …
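The excerpt centers on saliency maps as feature-relevance estimates for deep classifiers. As a minimal illustrative sketch (not the authors' method), the following PyTorch snippet computes a vanilla gradient saliency map; the tiny classifier architecture and the 64-feature input are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in classifier: the paper's actual EEG model is not
# specified in this excerpt, so this architecture is an assumption.
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()

# One input sample with 64 assumed features; gradients flow back to it.
x = torch.randn(1, 64, requires_grad=True)

# Score of the predicted class, then its gradient w.r.t. the input.
logits = model(x)
score = logits[0, logits.argmax(dim=1).item()]
score.backward()

# Vanilla gradient saliency: input-gradient magnitude as feature relevance.
saliency = x.grad.abs().squeeze(0)
print(saliency.shape)  # torch.Size([64])
```

The reliability concern raised in the abstract applies exactly here: when input features are statistically entangled, gradient magnitudes like these can misattribute relevance across correlated features.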

algorithms arxiv case study deep learning deep learning algorithms interpretability learning study
