June 9, 2022, 1:12 a.m. | Xinzhe Han, Shuhui Wang, Chi Su, Qingming Huang, Qi Tian

cs.CV updates on arXiv.org

Neural networks often make predictions by relying on spurious correlations in their training datasets rather than on the intrinsic properties of the task of interest, and consequently suffer sharp degradation on out-of-distribution (OOD) test data. Existing de-bias learning frameworks try to capture specific dataset biases through annotations, but they fail to handle complicated OOD scenarios. Others implicitly identify dataset bias with specially designed low-capability biased models or losses, but they degrade when the training and testing data come from the same distribution. …
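
The truncated abstract only gestures at the second family of methods. As a rough illustration, and not the authors' method, the sketch below shows one common way such an implicit de-bias scheme can work: a deliberately low-capability biased model is trained alongside the main model, and samples it fits easily (i.e., likely explainable by a spurious shortcut) are down-weighted in the main model's loss. The weighting rule, model sizes, and all names (debiased_step, main_model, biased_model) are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

def debiased_step(main_model, biased_model, optimizer, x, y):
    """One step: down-weight samples the low-capability biased model fits easily."""
    logits_main = main_model(x)
    logits_bias = biased_model(x)

    # Per-sample cross-entropy for both models.
    ce_main = F.cross_entropy(logits_main, y, reduction="none")
    ce_bias = F.cross_entropy(logits_bias, y, reduction="none")

    # Relative difficulty: near 0 when the biased model already fits the
    # sample (a shortcut suffices), near 1 when only the main model can.
    with torch.no_grad():
        w = ce_bias / (ce_bias + ce_main + 1e-8)

    # The biased model keeps training on plain CE; the main model sees the
    # re-weighted loss, so it focuses on samples shortcuts cannot explain.
    loss = (w * ce_main).mean() + ce_bias.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: a weak linear biased model next to a higher-capacity main model.
main_model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10))
biased_model = nn.Linear(32, 10)  # deliberately weak: captures shortcuts only
optimizer = torch.optim.Adam(
    list(main_model.parameters()) + list(biased_model.parameters()), lr=1e-3)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
print(debiased_step(main_model, biased_model, optimizer, x, y))

The intuition behind the low capability is that a weak model can only exploit simple spurious cues, so a low biased-model loss flags shortcut-reliant samples.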

