Sept. 8, 2022, 1:12 a.m. | Johannes Haug, Alexander Braun, Stefan Zürn, Gjergji Kasneci

stat.ML updates on arXiv.org

As complex machine learning models are increasingly used in sensitive
applications like banking, trading or credit scoring, there is a growing demand
for reliable explanation mechanisms. Local feature attribution methods have
become a popular technique for post-hoc and model-agnostic explanations.
However, attribution methods typically assume a stationary environment in which
the predictive model has been trained and remains stable. As a result, it is
often unclear how local attributions behave in realistic, constantly evolving
settings such as streaming and online …

arxiv, change, data, data streams, detection, explainability
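
The concern raised in the abstract, that local attributions computed under a stationarity assumption may shift once the model is updated on a stream, can be illustrated with a small sketch. The example below is not the paper's method; the synthetic stream, the incrementally trained scikit-learn SGDClassifier, and the occlusion-style attribution are all illustrative assumptions. It recomputes a simple per-feature attribution for the same query point before and after a simulated concept drift.

```python
# Minimal sketch (assumptions: synthetic data, SGDClassifier, occlusion attribution)
# of how a local attribution for a fixed query point can change when the model
# adapts online to a drifting stream.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_batch(n, flip=False):
    """Synthetic 2-feature stream; after the drift, the informative feature switches."""
    X = rng.normal(size=(n, 2))
    w = np.array([0.0, 2.0]) if flip else np.array([2.0, 0.0])
    y = (X @ w + rng.normal(scale=0.1, size=n) > 0).astype(int)
    return X, y

def occlusion_attribution(model, x, baseline):
    """Per-feature attribution: drop in P(y=1) when a feature is replaced by a baseline value."""
    p_full = model.predict_proba(x.reshape(1, -1))[0, 1]
    attributions = []
    for j in range(len(x)):
        x_masked = x.copy()
        x_masked[j] = baseline[j]
        p_masked = model.predict_proba(x_masked.reshape(1, -1))[0, 1]
        attributions.append(p_full - p_masked)
    return np.array(attributions)

model = SGDClassifier(loss="log_loss", random_state=0)
x_query = np.array([1.0, 1.0])
baseline = np.zeros(2)

# Phase 1: stationary stream, feature 0 is informative.
X, y = make_batch(2000)
model.partial_fit(X, y, classes=[0, 1])
print("before drift:", occlusion_attribution(model, x_query, baseline))

# Phase 2: concept drift, feature 1 becomes informative; the model keeps learning online.
for _ in range(20):
    X, y = make_batch(2000, flip=True)
    model.partial_fit(X, y)
print("after drift: ", occlusion_attribution(model, x_query, baseline))
```

In this toy setup the attribution mass sits on feature 0 before the drift and moves toward feature 1 after the model adapts, even though the query point never changed. That is the kind of instability that motivates studying attribution methods in streaming and online settings.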
