Jan. 31, 2024, 4:42 p.m. | Jens Henriksson, Christian Berger, Stig Ursing, Markus Borg

cs.CV updates on arXiv.org

Safety measures need to be systematically investigated to determine to what extent they
evaluate the intended performance of Deep Neural Networks (DNNs) in critical
applications. Because verification methods for high-dimensional DNNs are lacking,
a trade-off is needed between accepted performance and the handling of
out-of-distribution (OOD) samples.


This work evaluates rejecting outputs from semantic segmentation DNNs by
using a Mahalanobis distance (MD), computed with respect to the
class-conditional Gaussian distribution of the most probable (predicted)
class, as an OOD score. The evaluation follows …
