Combining Contrastive and Non-Contrastive Losses for Fine-Tuning Pretrained Models in Speech Analysis. (arXiv:2211.01964v1 [cs.CL])
Nov. 4, 2022, 1:16 a.m. | Florian Lux, Ching-Yi Chen, Ngoc Thang Vu
cs.CL updates on arXiv.org
Embedding paralinguistic properties is a challenging task, as only a few hours of training data are available for domains such as emotional speech. One solution to this problem is to pretrain a general self-supervised speech representation model on large amounts of unlabeled speech. This pretrained model is then fine-tuned to a specific task. Paralinguistic properties, however, have notoriously high class variance, making the fine-tuning ineffective. In this work, we propose a two-step approach to this. First we improve …
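The snippet cuts off before the method details, so the exact losses the authors combine are not shown here. As a rough illustration only, the following PyTorch sketch pairs a standard contrastive term (NT-Xent) with a non-contrastive, VICReg-style variance/covariance regularizer. The specific loss choices, the weighting alpha, and all function names are assumptions for illustration, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    """Contrastive NT-Xent loss between two batches of embeddings.

    z1, z2: (batch, dim) embeddings of two views of the same utterances.
    Positive pairs are (z1[i], z2[i]); all other pairs act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2B, D)
    sim = z @ z.t() / temperature                       # scaled cosine similarities
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float("-inf"))               # drop self-similarity
    # the positive for row i is row i + n (and vice versa)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def vicreg_style_loss(z1, z2, var_weight=1.0, cov_weight=0.04):
    """Non-contrastive VICReg-style regularizer: invariance + variance + covariance."""
    invariance = F.mse_loss(z1, z2)                     # pull the two views together
    def var_cov(z):
        z = z - z.mean(dim=0)
        std = torch.sqrt(z.var(dim=0) + 1e-4)
        var_loss = torch.relu(1.0 - std).mean()         # keep per-dimension variance up
        cov = (z.t() @ z) / (z.size(0) - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        cov_loss = off_diag.pow(2).sum() / z.size(1)    # decorrelate embedding dimensions
        return var_loss, cov_loss
    v1, c1 = var_cov(z1)
    v2, c2 = var_cov(z2)
    return invariance + var_weight * (v1 + v2) + cov_weight * (c1 + c2)

def combined_loss(z1, z2, alpha=0.5):
    """Weighted sum of a contrastive and a non-contrastive term (alpha is a guess)."""
    return alpha * nt_xent_loss(z1, z2) + (1 - alpha) * vicreg_style_loss(z1, z2)
```

In a fine-tuning setup along these lines, z1 and z2 would be the pretrained model's embeddings of two augmented views of the same utterance, and combined_loss would be minimized alongside, or before, the downstream paralinguistic classification objective.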