Feb. 14, 2024, 5:44 a.m. | Hubert Baniecki, Przemyslaw Biecek

cs.LG updates on arXiv.org

Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as for interpreting their predictions. However, recent advances in adversarial machine learning (AdvML) highlight the limitations and vulnerabilities of state-of-the-art explanation methods, calling their security and trustworthiness into question. The possibility of manipulating, fooling, or fairwashing evidence of the model's reasoning has detrimental consequences when applied in high-stakes decision-making and knowledge discovery. This survey provides a comprehensive overview of research …
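
The threat the abstract alludes to, altering the evidence of a model's reasoning without altering its prediction, can be sketched concretely. Below is a minimal, hypothetical PyTorch sketch of such an explanation-manipulation attack, in the spirit of the attacks this survey covers (e.g., Dombrowski et al., 2019): it perturbs an input so that its input-gradient saliency matches an attacker-chosen target, while penalizing any drift in the model's logits. The toy model, target explanation, and loss weights are illustrative assumptions, not the survey's method.

# Minimal, hypothetical sketch of an explanation-manipulation attack.
# The model, target explanation, and loss weights are illustrative
# assumptions, not taken from the survey.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy differentiable classifier; Tanh keeps the input gradient smooth
# enough to be attacked with second-order optimization.
model = nn.Sequential(nn.Linear(10, 16), nn.Tanh(), nn.Linear(16, 2))

def saliency(x):
    # Input-gradient explanation of the top class score.
    # create_graph=True lets us differentiate through the explanation.
    score = model(x)[0].max()
    (grad,) = torch.autograd.grad(score, x, create_graph=True)
    return grad

x = torch.randn(1, 10)
orig_logits = model(x).detach()

# Attacker-chosen explanation: blame only feature 0.
target_expl = torch.zeros_like(x)
target_expl[0, 0] = 1.0

x_adv = x.clone().requires_grad_(True)
opt = torch.optim.Adam([x_adv], lr=1e-2)

for _ in range(200):
    opt.zero_grad()
    # Push the saliency toward the target while keeping the logits,
    # and hence the prediction, (nearly) unchanged.
    expl_loss = (saliency(x_adv) - target_expl).pow(2).sum()
    pred_loss = (model(x_adv) - orig_logits).pow(2).sum()
    (expl_loss + 10.0 * pred_loss).backward()
    opt.step()

print("max logit drift:", (model(x_adv) - orig_logits).abs().max().item())
print("saliency after attack:", saliency(x_adv).detach())

The balance between the two loss terms is the design knob here: a larger prediction-matching weight keeps the classifier's output effectively unchanged, so the manipulated saliency is the only visible difference to anyone auditing the model.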
