Detection and Recovery Against Deep Neural Network Fault Injection Attacks Based on Contrastive Learning. (arXiv:2401.16766v1 [cs.LG])
Deep Neural Network (DNN) models deployed on executing devices as inference engines are susceptible to Fault Injection Attacks (FIAs), which manipulate model parameters to disrupt inference execution and cause disastrous performance degradation. This work introduces Contrastive Learning (CL) of visual representations, a self-supervised learning approach, into the deep learning training and inference pipeline to build DNN inference engines that are self-resilient under FIAs. Our proposed CL-based FIA Detection and Recovery (CFDR) framework features (i) real-time detection with only a …
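The abstract does not specify which contrastive objective the CFDR framework uses; as background, a minimal sketch of the standard NT-Xent (normalized temperature-scaled cross-entropy) loss commonly used in contrastive learning of visual representations is shown below. The function name, batch layout (rows 2i and 2i+1 as positive pairs), and temperature value are illustrative assumptions, not details from the paper.

```python
import numpy as np

def nt_xent_loss(z, temperature=0.5):
    """NT-Xent contrastive loss (illustrative sketch, not the paper's exact objective).

    z: (2N, d) array of embeddings; rows 2i and 2i+1 are assumed to be
    the two augmented views (the positive pair) of sample i.
    """
    # Normalize so dot products are cosine similarities.
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature            # (2N, 2N) scaled similarity matrix
    np.fill_diagonal(sim, -np.inf)         # exclude self-similarity from the softmax

    n = z.shape[0]
    pos = np.arange(n) ^ 1                 # partner index: 2i <-> 2i+1

    # Cross-entropy of each row's positive against all other entries.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(n), pos] - logsumexp)
    return loss.mean()
```

Intuitively, minimizing this loss pulls the two views of each sample together while pushing apart all other samples in the batch, which is what lets a detector flag representations that drift after a parameter fault.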