April 26, 2024, 4:41 a.m. | Jonas Teufel, Pascal Friederich

cs.LG updates on arXiv.org

arXiv:2404.16532v1 Announce Type: new
Abstract: Beyond improving trust and validating model fairness, xAI practices also have the potential to recover valuable scientific insights in application domains where little to no prior human intuition exists. To that end, we propose a method to extract global concept explanations from the predictions of graph neural networks to develop a deeper understanding of the tasks underlying structure-property relationships. We identify concept explanations as dense clusters in the self-explaining Megan models subgraph latent space. For …
