March 21, 2024, 4:48 a.m. | Siyin Wang, Chao-Han Huck Yang, Ji Wu, Chao Zhang

cs.CL updates on arXiv.org

arXiv:2309.07081v2 Announce Type: replace-cross
Abstract: This paper investigates the in-context learning abilities of the Whisper automatic speech recognition (ASR) models released by OpenAI. A novel speech-based in-context learning (SICL) approach is proposed for test-time adaptation, which reduces word error rates (WERs) using only a small number of labelled speech samples and no gradient descent. Language-level adaptation experiments on Chinese dialects showed that when SICL is applied to isolated-word ASR, consistent and considerable relative WER reductions can be achieved using …
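
The abstract does not spell out the mechanism, but a plausible reading of SICL is that the labelled samples are supplied as context at decoding time rather than used for fine-tuning. The sketch below illustrates that idea with the openai-whisper package: a few labelled context utterances are concatenated in front of the test utterance, and their transcripts are passed as the decoder's initial prompt, so no gradient updates occur. The helper name `sicl_transcribe`, the audio-concatenation strategy, and the prompt-based conditioning are assumptions for illustration, not the authors' exact recipe.

```python
import numpy as np
import whisper

# Any released Whisper checkpoint can be used here.
model = whisper.load_model("base")


def sicl_transcribe(context_pairs, test_audio_path):
    """Sketch of speech-based in-context learning at test time.

    context_pairs: list of (audio_path, transcript) labelled examples,
    e.g. isolated words from the target dialect.
    test_audio_path: path to the utterance to be recognised.
    """
    # Concatenate the context audio followed by the test audio
    # (whisper.load_audio returns 16 kHz float32 waveforms).
    audio_parts = [whisper.load_audio(path) for path, _ in context_pairs]
    audio_parts.append(whisper.load_audio(test_audio_path))
    audio = np.concatenate(audio_parts)

    # The context transcripts condition the decoder via the initial prompt.
    prompt = " ".join(text for _, text in context_pairs)

    # No gradient descent: adaptation happens purely through the context.
    result = model.transcribe(audio, initial_prompt=prompt)
    return result["text"]
```

In practice the returned text would still contain the context words, so a real pipeline would need to strip or align them; the point of the sketch is only how labelled speech can adapt the model without any parameter updates.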
