Web: http://arxiv.org/abs/2201.11808

Jan. 31, 2022, 2:11 a.m. | Rassa Ghavami Modegh, Ahmad Salimi, Hamid R. Rabiee

cs.LG updates on arXiv.org

Despite the state-of-the-art performance of deep convolutional neural
networks, they remain susceptible to bias and can malfunction in unseen
situations. The complex computation behind their reasoning is not sufficiently
human-understandable to develop trust. External explainer methods have tried to
interpret network decisions in a human-understandable way, but they are
criticized for fallacies arising from their assumptions and simplifications. On
the other hand, the inherent self-interpretability of models, while more robust
to such fallacies, cannot be applied to the …
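The "external explainer" methods the abstract refers to can be illustrated with a minimal sketch: a gradient-based saliency map, where the gradient of the predicted class score with respect to the input ranks feature importance. The tiny NumPy linear model below is purely illustrative (not the paper's method); for a linear layer the gradient of a class logit is simply that class's weight row.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1-D array of logits."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical tiny "network": a single linear layer + softmax,
# standing in for a real CNN purely for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))   # 3 classes, 8 input features
x = rng.normal(size=8)        # one input sample

p = softmax(W @ x)            # class probabilities
c = int(p.argmax())           # predicted class

# For this linear model, the gradient of the class-c logit w.r.t.
# the input is exactly W[c]; its per-feature magnitude serves as
# a crude saliency map.
saliency = np.abs(W[c])
ranking = np.argsort(saliency)[::-1]  # features, most salient first
```

For a real CNN the same idea requires backpropagating the class score to the input pixels (as in vanilla gradient saliency); the assumptions and simplifications of such explainers are exactly what the abstract criticizes.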

