May 16, 2024, 4:41 a.m. | Chendi Wang, Yuqing Zhu, Weijie J. Su, Yu-Xiang Wang

cs.LG updates on arXiv.org

arXiv:2405.08920v1 Announce Type: new
Abstract: A recent study by De et al. (2022) has reported that large-scale representation learning through pre-training on a public dataset significantly enhances differentially private (DP) learning in downstream tasks, despite the high dimensionality of the feature space. To theoretically explain this phenomenon, we consider the setting of a layer-peeled model in representation learning, which results in interesting phenomena related to learned features in deep learning and transfer learning, known as Neural Collapse (NC).
Within the …
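
The setting the abstract describes, privately training only a final linear layer (a layer-peeled view) on top of frozen, pre-trained features, can be illustrated with a minimal DP-SGD sketch: per-example gradient clipping followed by Gaussian noise calibrated to the clipping bound. This is an assumption-laden illustration of that general setting, not the paper's construction; the synthetic features, labels, and all hyperparameter values below are hypothetical.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "pre-trained" features: n examples, d dimensions, k classes.
n, d, k = 512, 256, 10
features = rng.normal(size=(n, d))
labels = rng.integers(0, k, size=n)

# Illustrative DP-SGD hyperparameters.
clip_norm = 1.0         # per-example gradient clipping bound C
noise_multiplier = 1.0  # sigma; noise std is sigma * C
lr = 0.5
epochs = 20
batch_size = 64

W = np.zeros((d, k))    # only the last-layer weights are trained

def per_example_grads(X, y, W):
    # Softmax cross-entropy gradients w.r.t. W, one per example: shape (b, d, k).
    logits = X @ W
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0      # dL/dlogits = p - one_hot(y)
    return X[:, :, None] * probs[:, None, :]

for _ in range(epochs):
    idx = rng.permutation(n)
    for start in range(0, n, batch_size):
        batch = idx[start:start + batch_size]
        g = per_example_grads(features[batch], labels[batch], W)
        # Clip each example's gradient to norm at most clip_norm.
        norms = np.linalg.norm(g.reshape(len(batch), -1), axis=1)
        g *= np.minimum(1.0, clip_norm / (norms + 1e-12))[:, None, None]
        # Sum, add Gaussian noise scaled to the clipping bound, then average.
        noisy = g.sum(axis=0) + rng.normal(
            scale=noise_multiplier * clip_norm, size=W.shape)
        W -= lr * noisy / len(batch)

acc = (np.argmax(features @ W, axis=1) == labels).mean()
print(f"training accuracy (synthetic data): {acc:.3f}")

Per-example clipping bounds each example's contribution to the gradient, so the Gaussian noise can be calibrated to a fixed sensitivity regardless of the feature dimension d; the abstract's question is why this dimension does not hurt downstream accuracy when the features come from large-scale pre-training.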

