Feb. 7, 2024, 5:44 a.m. | Raphael Joud, Pierre-Alain Moellic, Simon Pontie, Jean-Baptiste Rigaud

cs.LG updates on arXiv.org

Model extraction is a growing concern for the security of AI systems. For deep neural network models, the architecture is the most important piece of information an adversary aims to recover. Because they consist of sequences of repeated computation blocks, neural network models deployed on edge devices generate distinctive side-channel leakage. This leakage can be exploited to extract critical information when the targeted platform is physically accessible. By combining theoretical knowledge about deep learning practices and analysis of a widespread implementation library (ARM CMSIS-NN), our …
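To illustrate the intuition behind the attack (not the paper's actual method), here is a minimal sketch in which each layer type is assumed to produce a characteristic, repeated pattern in a side-channel trace; segmenting the trace then recovers the number of layers. The per-layer amplitudes and trace shapes are entirely hypothetical.

```python
# Illustrative sketch: repeated computation blocks leave repeated
# patterns in a timing/power trace, so segmenting the trace reveals
# the sequence of layers. Amplitudes below are hypothetical.

def layer_trace(kind, length):
    """Hypothetical per-layer leakage: a constant-amplitude segment."""
    amplitude = {"conv": 3.0, "relu": 1.0, "dense": 2.0}[kind]
    return [amplitude] * length

def model_trace(layers):
    """Concatenate per-layer segments, as a deployed model executes
    its layers back to back."""
    trace = []
    for kind, length in layers:
        trace.extend(layer_trace(kind, length))
    return trace

def count_segments(trace):
    """Count contiguous constant-amplitude segments: an estimate of
    how many layers the model executed."""
    segments = 1
    for prev, cur in zip(trace, trace[1:]):
        if cur != prev:
            segments += 1
    return segments

# A toy 5-layer model: conv -> relu -> conv -> relu -> dense.
layers = [("conv", 5), ("relu", 2), ("conv", 5), ("relu", 2), ("dense", 3)]
print(count_segments(model_trace(layers)))  # 5
```

In a real attack the trace would be a noisy physical measurement rather than clean constants, so segmentation would rely on statistical pattern matching, but the principle is the same: the block structure of the network is visible in the leakage.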

