One Explanation Does Not Fit XIL. (arXiv:2304.07136v1 [cs.LG])
cs.LG updates on arXiv.org
Current machine learning models produce outstanding results in many areas but,
at the same time, suffer from shortcut learning and spurious correlations. To
address such flaws, the explanatory interactive machine learning (XIL)
framework has been proposed to revise a model by employing user feedback on the
model's explanations. This work sheds light on the explanations used within
this framework. In particular, we investigate simultaneous model revision
through multiple explanation methods. To this end, we identified that one
explanation does not …