April 24, 2023, 12:44 a.m. | Wenqiao Zhang, Changshuo Liu, Lingze Zeng, Beng Chin Ooi, Siliang Tang, Yueting Zhuang

cs.LG updates on arXiv.org

Conventional multi-label classification (MLC) methods assume that all samples
are fully labeled and identically distributed. Unfortunately, this assumption
is unrealistic for large-scale MLC data, which often have a long-tailed (LT)
distribution and partial labels (PL). To address this problem, we introduce a
novel task, Partial labeling and Long-Tailed Multi-Label Classification
(PLT-MLC), which jointly considers these two imperfect learning environments.
Not surprisingly, we find that most LT-MLC and PL-MLC approaches fail to solve
PLT-MLC, resulting in significant performance degradation on the …
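To make the PLT-MLC setting concrete, the sketch below builds a toy multi-label dataset whose label frequencies decay in a long-tailed fashion and whose targets are only partially observed. The encoding (1 = present, 0 = absent, -1 = unobserved) and all parameter values are illustrative assumptions, not the paper's actual protocol.

```python
import numpy as np

# Minimal sketch of a PLT-MLC-style dataset (illustrative assumptions only).
# Rows are samples, columns are labels:
#   1 -> label present, 0 -> label absent, -1 -> label unobserved (partial).
rng = np.random.default_rng(0)

n_samples, n_labels = 1000, 20

# Long-tailed label frequencies: head labels occur often, tail labels rarely.
label_freq = 0.5 * np.exp(-0.4 * np.arange(n_labels))

# Fully observed ground truth (unknown in practice, used here for simulation).
full_targets = (rng.random((n_samples, n_labels)) < label_freq).astype(int)

# Partial labeling: each entry is observed only with some probability.
observe_prob = 0.3
observed_mask = rng.random((n_samples, n_labels)) < observe_prob
partial_targets = np.where(observed_mask, full_targets, -1)

print("positives per label (long tail):", full_targets.sum(axis=0))
print("fraction of observed entries:", round(observed_mask.mean(), 3))
```

A training pipeline for this setting would typically mask the loss on the -1 entries and reweight or resample the tail labels; the exact mechanism depends on the method used.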

