Web: http://arxiv.org/abs/2204.00480

Sept. 23, 2022, 1:12 a.m. | Hazem Fahmy, Fabrizio Pastore, Lionel Briand, Thomas Stifter

cs.LG updates on arXiv.org

When Deep Neural Networks (DNNs) are used in safety-critical systems,
engineers should determine the safety risks associated with failures (i.e.,
erroneous outputs) observed during testing. For DNNs processing images,
engineers visually inspect all failure-inducing images to determine common
characteristics among them. Such characteristics correspond to
hazard-triggering events (e.g., low illumination) that are essential inputs for
safety analysis. Though informative, such activity is expensive and
error-prone.
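
As a deliberately simplified illustration of what such an inspection looks for (not SEDE itself), the sketch below scores a set of failure-inducing images by mean brightness to flag low illumination as a shared characteristic; the directory name and threshold are assumptions made for the example.

# Hypothetical sketch: automating one candidate hazard-triggering
# characteristic -- low illumination -- over failure-inducing images,
# instead of inspecting every image by eye.
from pathlib import Path

import numpy as np
from PIL import Image

FAILURE_DIR = Path("failure_inducing_images")  # assumed location of failing test images
LOW_LIGHT_THRESHOLD = 60  # assumed mean-intensity cutoff on a 0-255 grayscale

def mean_brightness(image_path: Path) -> float:
    """Average grayscale intensity of a single image."""
    with Image.open(image_path) as img:
        return float(np.asarray(img.convert("L")).mean())

brightness = {p.name: mean_brightness(p) for p in sorted(FAILURE_DIR.glob("*.png"))}
low_light = [name for name, b in brightness.items() if b < LOW_LIGHT_THRESHOLD]

share = len(low_light) / max(len(brightness), 1)
print(f"{len(low_light)}/{len(brightness)} failing images ({share:.0%}) appear low-illumination")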


To support such safety analysis practices, we propose SEDE, a technique that
generates readable descriptions …

Tags: arxiv, debugging, dnn, events, safety, safety-critical systems
