Feb. 20, 2024, 5:43 a.m. | Lars Nieradzik, Henrike Stephani, Jördis Sieburg-Rockel, Stephanie Helmling, Andrea Olbrich, Janis Keuper

cs.LG updates on arXiv.org arxiv.org

arXiv:2402.11670v1 Announce Type: cross
Abstract: In this study, we explore the explainability of neural networks in agriculture and forestry, specifically in fertilizer treatment classification and wood identification. The opaque nature of these models, often considered 'black boxes', is addressed through an extensive evaluation of state-of-the-art Attribution Maps (AMs), also known as class activation maps (CAMs) or saliency maps. Our comprehensive qualitative and quantitative analysis of these AMs uncovers critical practical limitations. Findings reveal that AMs frequently fail to consistently highlight …
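The attribution maps (AMs/CAMs) evaluated in the abstract share one core computation: a per-pixel weighted sum of the network's final convolutional feature maps, weighted by the classifier weights of the predicted class. A minimal sketch of that computation, with toy values that are purely illustrative and not taken from the paper:

```python
# Minimal class activation map (CAM) sketch. feature_maps plays the role
# of the final conv layer's output (channels x H x W) and class_weights
# the classifier weights for one class; both are hypothetical toy values.

def class_activation_map(feature_maps, class_weights):
    """Return an H x W map: CAM[i][j] = sum_k w_k * A_k[i][j]."""
    h = len(feature_maps[0])
    w = len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for weight, fmap in zip(class_weights, feature_maps):
        for i in range(h):
            for j in range(w):
                cam[i][j] += weight * fmap[i][j]
    return cam

# Two 2x2 feature maps, weighted 0.5 and 2.0 for a hypothetical class.
feature_maps = [
    [[1.0, 0.0],
     [0.0, 1.0]],
    [[0.0, 1.0],
     [1.0, 0.0]],
]
cam = class_activation_map(feature_maps, [0.5, 2.0])
print(cam)  # [[0.5, 2.0], [2.0, 0.5]]
```

In practice the resulting map is upsampled to the input resolution and overlaid on the image; the paper's critique concerns how reliably such maps highlight the regions that actually drive the prediction.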

