Feb. 20, 2024, 5:43 a.m. | Lars Nieradzik, Henrike Stephani, Jördis Sieburg-Rockel, Stephanie Helmling, Andrea Olbrich, Janis Keuper

cs.LG updates on arXiv.org

arXiv:2402.11670v1 Announce Type: cross
Abstract: In this study, we explore the explainability of neural networks in agriculture and forestry, specifically in fertilizer treatment classification and wood identification. The opaque nature of these models, often considered 'black boxes', is addressed through an extensive evaluation of state-of-the-art Attribution Maps (AMs), also known as class activation maps (CAMs) or saliency maps. Our comprehensive qualitative and quantitative analysis of these AMs uncovers critical practical limitations. Findings reveal that AMs frequently fail to consistently highlight …
