Additive Margin in Contrastive Self-Supervised Frameworks to Learn Discriminative Speaker Representations
April 24, 2024, 4:42 a.m. | Theo Lepage, Reda Dehak
Source: cs.LG updates on arXiv.org
Abstract: Self-Supervised Learning (SSL) frameworks have become the standard for learning robust class representations, benefiting from large unlabeled datasets. For Speaker Verification (SV), most SSL systems rely on contrastive loss functions. We explore different ways to improve the performance of these techniques by revisiting the NT-Xent contrastive loss. Our main contribution is the definition of the NT-Xent-AM loss and the study of the importance of Additive Margin (AM) in the SimCLR and MoCo SSL methods to further …
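To make the idea concrete, here is a minimal sketch (not the authors' implementation) of an NT-Xent loss with an additive margin: the margin m is subtracted from the positive-pair cosine similarity before temperature scaling and softmax normalization, which forces the positive pair to be more similar than the negatives by at least that margin. All names and parameter values below are illustrative assumptions.

```python
import math

def cosine(u, v):
    # cosine similarity between two embedding vectors
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def nt_xent_am(anchor, positive, negatives, tau=0.1, margin=0.2):
    """NT-Xent loss for one anchor, with an additive margin (AM).

    The margin is applied only to the positive logit, as in AM-Softmax:
    pos_logit = (cos(anchor, positive) - margin) / tau.
    Setting margin=0 recovers the plain NT-Xent loss.
    """
    pos = (cosine(anchor, positive) - margin) / tau
    negs = [cosine(anchor, n) / tau for n in negatives]
    # numerically stable log-sum-exp over positive + negative logits
    logits = [pos] + negs
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return lse - pos  # equals -log softmax(positive logit)
```

For a fixed anchor/positive pair, increasing the margin strictly increases the loss, so training must pull the positive pair closer (or push negatives further) to compensate, which is the discriminative effect the abstract describes.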