June 24, 2024, 4:41 a.m. | Varsha Suresh, Salah Aït-Mokhtar, Caroline Brun, Ioan Calapodescu

cs.CL updates on arXiv.org

arXiv:2406.14747v1 Announce Type: new
Abstract: Self-supervised learning models have revolutionized the field of speech processing. However, the process of fine-tuning these models on downstream tasks requires substantial computational resources, particularly when dealing with multiple speech-processing tasks. In this paper, we explore the potential of adapter-based fine-tuning in developing a unified model capable of effectively handling multiple spoken language processing tasks. The tasks we investigate are Automatic Speech Recognition, Phoneme Recognition, Intent Classification, Slot Filling, and Spoken Emotion Recognition. We validate …
