Black-box Safety Analysis and Retraining of DNNs based on Feature Extraction and Clustering. (arXiv:2201.05077v1 [cs.SE])
Jan. 14, 2022, 2:10 a.m. | Mohammed Oualid Attaoui, Hazem Fahmy, Fabrizio Pastore, Lionel Briand
cs.LG updates on arXiv.org
Deep neural networks (DNNs) have demonstrated superior performance over
classical machine learning to support many features in safety-critical systems.
Although DNNs are now widely used in such systems (e.g., self-driving cars),
there is limited progress regarding automated support for functional safety
analysis in DNN-based systems. For example, the identification of root causes
of errors, to enable both risk analysis and DNN retraining, remains an open
problem. In this paper, we propose SAFE, a black-box approach to automatically
characterize the …
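The abstract names feature extraction and clustering as SAFE's building blocks, but the truncated text does not show the concrete pipeline. As a minimal, hypothetical sketch of the clustering step only: assume feature vectors have already been extracted (black-box, i.e., without access to the DNN's internals) from the error-inducing test inputs, and group them with density-based clustering so that each cluster becomes a candidate root cause. The synthetic `features` array and the DBSCAN parameters below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.cluster import DBSCAN


def cluster_error_features(features, eps=0.5, min_samples=3):
    """Group feature vectors of error-inducing inputs into candidate
    root-cause clusters; label -1 marks unclustered outliers."""
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)


# Synthetic stand-in for features extracted from failing inputs:
# two tight groups (two plausible root causes) plus one far outlier.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.05, size=(10, 4))
group_b = rng.normal(loc=3.0, scale=0.05, size=(10, 4))
outlier = np.full((1, 4), 10.0)
features = np.vstack([group_a, group_b, outlier])

labels = cluster_error_features(features)
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)  # two candidate root-cause clusters; outlier is noise
```

Each resulting cluster could then be inspected (or used to select retraining data), which is the kind of risk-analysis and retraining support the abstract describes.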