March 26, 2024, 4:42 a.m. | Md Abdul Kadir, GowthamKrishna Addluri, Daniel Sonntag

cs.LG updates on arXiv.org

arXiv:2403.16569v1 Announce Type: new
Abstract: Explainable Artificial Intelligence (XAI) strategies play a crucial part in increasing the understanding and trustworthiness of neural networks. Nonetheless, these techniques can generate misleading explanations. Blinding attacks can drastically alter a machine learning algorithm's prediction and explanation by adding visually unnoticeable artifacts to the input, all while maintaining the model's accuracy. This poses a serious challenge to ensuring the reliability of XAI methods …
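
The truncated abstract does not show the paper's construction, but the general idea of an explanation-aware ("blinding") attack can be sketched: optimize a small, bounded perturbation that preserves the model's prediction while pulling a gradient saliency map toward an attacker-chosen target. The sketch below is a minimal illustration under assumed names and settings (saliency, blinding_attack, the loss weights, the toy Softplus model), not the authors' method.

import torch
import torch.nn as nn
import torch.nn.functional as F

def saliency(model, x, create_graph=False):
    # Gradient explanation: |d(predicted logit)/dx|. create_graph=True keeps
    # the explanation itself differentiable so the attack can optimize through it.
    if not x.requires_grad:
        x = x.detach().requires_grad_(True)
    logits = model(x)
    score = logits.gather(1, logits.argmax(1, keepdim=True)).sum()
    (grad,) = torch.autograd.grad(score, x, create_graph=create_graph)
    return grad.abs()

def blinding_attack(model, x, target_expl, steps=200, eps=0.03, lr=1e-2, lam=10.0):
    # Search for a bounded perturbation delta such that the prediction on
    # x + delta matches the clean prediction while the saliency map is
    # pulled toward an attacker-chosen target_expl. Hyperparameters are
    # illustrative assumptions.
    with torch.no_grad():
        clean_pred = model(x).argmax(1)
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)
        expl = saliency(model, x_adv, create_graph=True)
        expl = expl / (expl.max() + 1e-12)  # normalize for a stable loss scale
        logits = model(x_adv)
        # Term 1 keeps the prediction intact; term 2 steers the explanation.
        loss = F.cross_entropy(logits, clean_pred) + lam * F.mse_loss(expl, target_expl)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the artifact visually unnoticeable
    return (x + delta).clamp(0, 1).detach()

# Toy usage: Softplus (rather than ReLU) keeps second-order gradients nonzero,
# so the saliency map is actually attackable. The target explanation here
# concentrates all attribution in the top-left 8x8 patch.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.Softplus(),
                      nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
x = torch.rand(1, 3, 32, 32)
target = torch.zeros_like(x)
target[..., :8, :8] = 1.0
x_adv = blinding_attack(model, x, target)
print("prediction preserved:",
      model(x).argmax(1).item() == model(x_adv).argmax(1).item())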
