Class-wise Thresholding for Robust Out-of-Distribution Detection. (arXiv:2110.15292v3 [cs.LG] UPDATED)
July 4, 2022, 1:11 a.m. | Matteo Guarrera, Baihong Jin, Tung-Wei Lin, Maria Zuluaga, Yuxin Chen, Alberto Sangiovanni-Vincentelli
cs.LG updates on arXiv.org arxiv.org
We consider the problem of detecting OoD (out-of-distribution) input data when
using deep neural networks, and we propose a simple yet effective way to
improve the robustness of several popular OoD detection methods against label
shift. Our work is motivated by the observation that most existing OoD
detection algorithms treat all training/test data as a whole, regardless of
which class entry each input activates (inter-class differences). Through
extensive experimentation, we have found that this practice leads to a detector
whose performance …
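The contrast the abstract draws — a single global score threshold versus one threshold per class — can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it uses maximum-softmax-probability scores as the OoD score, calibrates a per-class threshold on in-distribution validation data at a target true-positive rate, and flags a test input as OoD when its score falls below the threshold of the class it activates. All function names are hypothetical.

```python
import numpy as np

def fit_classwise_thresholds(scores, preds, n_classes, tpr=0.95):
    """Per-class thresholds chosen so roughly `tpr` of in-distribution
    validation inputs predicted as each class are retained.

    Hypothetical helper illustrating the general class-wise idea."""
    thresholds = np.full(n_classes, -np.inf)
    for c in range(n_classes):
        class_scores = scores[preds == c]
        if len(class_scores):
            # Keep the top `tpr` fraction of scores for class c.
            thresholds[c] = np.quantile(class_scores, 1.0 - tpr)
    return thresholds

def is_ood(scores, preds, thresholds):
    """Flag an input as OoD when its score falls below the threshold
    of the class it activates (its predicted class)."""
    return scores < thresholds[preds]

# Toy usage: synthetic in-distribution validation scores for 2 classes.
rng = np.random.default_rng(0)
val_scores = rng.uniform(0.7, 1.0, size=1000)   # stand-in for MSP scores
val_preds = rng.integers(0, 2, size=1000)
th = fit_classwise_thresholds(val_scores, val_preds, n_classes=2)

test_scores = np.array([0.95, 0.5])
test_preds = np.array([0, 1])
print(is_ood(test_scores, test_preds, th))
```

A global-threshold detector would apply one cutoff to both test inputs; the class-wise version lets each predicted class carry its own cutoff, which is what makes the detector less sensitive to label shift in the abstract's framing.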