May 5, 2022, 1:12 a.m. | Kai-Wei Chang, Wei-Cheng Tseng, Shang-Wen Li, Hung-yi Lee

cs.LG updates on arXiv.org

Speech representations learned from self-supervised learning (SSL) models can
benefit various speech processing tasks. However, utilizing SSL representations
usually requires fine-tuning the pre-trained models or designing task-specific
downstream models and loss functions, which incurs substantial memory usage and
human labor. Recently, prompting in Natural Language Processing (NLP) has been
found to be an efficient technique for leveraging pre-trained language models
(LMs). Specifically, prompt tuning optimizes a limited number of task-specific
parameters with a fixed pre-trained model; as a result, only a …
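The passage above describes the core mechanism of prompt tuning: a small set of task-specific parameters is optimized while the pre-trained model stays frozen. The sketch below illustrates that idea in PyTorch under assumptions of ours: a generic transformer encoder stands in for the pre-trained model, and the prompt length, mean pooling, and linear classification head are illustrative placeholders, not the paper's actual GSLM-based pipeline.

```python
# Minimal prompt-tuning sketch (assumed setup, not the authors' implementation).
import torch
import torch.nn as nn

class PromptTunedClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, embed_dim: int, prompt_len: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        # Freeze every pre-trained parameter; only the prompts and head are trained.
        for p in self.backbone.parameters():
            p.requires_grad = False
        # Task-specific trainable prompt vectors prepended to the input sequence.
        self.prompts = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim) pre-extracted feature sequences.
        batch = x.size(0)
        prompts = self.prompts.unsqueeze(0).expand(batch, -1, -1)
        h = self.backbone(torch.cat([prompts, x], dim=1))
        return self.head(h.mean(dim=1))  # pool over time, then classify

# Toy usage: a small frozen transformer encoder stands in for the pre-trained model.
backbone = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), num_layers=2
)
model = PromptTunedClassifier(backbone, embed_dim=64, prompt_len=10, num_classes=5)
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)

features = torch.randn(8, 50, 64)          # dummy batch of feature sequences
labels = torch.randint(0, 5, (8,))
loss = nn.functional.cross_entropy(model(features), labels)
loss.backward()
optimizer.step()
```

Only the prompt vectors and the small head appear in the optimizer, so the per-task storage is limited to those few tensors while the pre-trained backbone is shared across tasks.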

