March 25, 2024, 4:42 a.m. | Philine Bommer, Marlene Kretschmer, Anna Hedström, Dilyara Bareeva, Marina M.-C. Höhne

cs.LG updates on arXiv.org

arXiv:2303.00652v2 Announce Type: replace
Abstract: Explainable artificial intelligence (XAI) methods shed light on the predictions of machine learning algorithms. Several different approaches exist and have already been applied in climate science. However, the usual absence of ground-truth explanations complicates their evaluation and comparison, which in turn impedes the choice of an XAI method. Therefore, in this work, we introduce XAI evaluation in the climate context and discuss different desired explanation properties, namely robustness, faithfulness, randomization, complexity, and localization. To this end, we chose …
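
As a rough illustration of one of the properties named above, the sketch below implements a simple deletion-style faithfulness check on a toy linear model: features are masked in order of decreasing attribution and the model score is tracked after each step. The model, data, and attribution map here are stand-ins chosen for the example, not the paper's climate setup or its evaluation code.

# Minimal sketch of a perturbation-based faithfulness check.
# Toy model, data, and attribution map; not from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy "model": a fixed linear scorer over a flattened input field.
weights = rng.normal(size=100)
def predict(x):
    return float(x @ weights)

# Toy input and attribution map (the gradient of a linear model is
# simply its weight vector).
x = rng.normal(size=100)
attribution = weights.copy()

def deletion_curve(x, attribution, predict, baseline=0.0, steps=10):
    """Mask features in order of decreasing |attribution| and record the
    model score after each masking step. A faithful explanation should
    yield a quickly decaying curve."""
    order = np.argsort(-np.abs(attribution))
    x_pert = x.copy()
    scores = [predict(x_pert)]
    chunk = len(order) // steps
    for i in range(steps):
        x_pert[order[i * chunk:(i + 1) * chunk]] = baseline
        scores.append(predict(x_pert))
    return np.array(scores)

curve = deletion_curve(x, attribution, predict)
print("score after each deletion step:", np.round(curve, 2))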
