An Exploration of Prompt Tuning on Generative Spoken Language Model for Speech Processing Tasks. (arXiv:2203.16773v2 [eess.AS] UPDATED)
cs.LG updates on arXiv.org
Speech representations learned from self-supervised learning (SSL) models can
benefit various speech processing tasks. However, utilizing SSL representations
usually requires fine-tuning the pre-trained models or designing task-specific
downstream models and loss functions, which incurs substantial memory usage and
human labor. Recently, prompting in Natural Language Processing (NLP) has been
found to be an efficient technique for leveraging pre-trained language models
(LMs). Specifically, prompt tuning optimizes a limited number of task-specific
parameters with a fixed pre-trained model; as a result, only a …
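The core idea the abstract describes — training only a small set of prompt parameters while the pre-trained model stays completely frozen — can be sketched in a toy form. Everything below (the tiny frozen linear model, the dimensions, the learning rate) is a hypothetical illustration for intuition, not the paper's Generative Spoken Language Model or its actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pre-trained model: a fixed linear head over
# mean-pooled 8-dim token embeddings, producing logits for 2 classes.
W_frozen = rng.normal(size=(8, 2))

def forward(prompt, x):
    """Prepend the trainable prompt vectors to the fixed input
    embeddings, mean-pool, and apply the frozen projection."""
    seq = np.concatenate([prompt, x], axis=0)   # (n_prompt + n_tokens, 8)
    pooled = seq.mean(axis=0)
    return pooled @ W_frozen                    # logits, shape (2,)

def loss_and_grad(prompt, x, y):
    """Cross-entropy loss and its gradient w.r.t. the prompt only;
    W_frozen appears in the backward pass but is never updated."""
    logits = forward(prompt, x)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    loss = -np.log(p[y])
    dlogits = p.copy()
    dlogits[y] -= 1.0                           # dL/dlogits = softmax - onehot
    dpooled = W_frozen @ dlogits                # chain rule through frozen head
    n = prompt.shape[0] + x.shape[0]
    dprompt = np.tile(dpooled / n, (prompt.shape[0], 1))
    return loss, dprompt

# Only the prompt (5 vectors of dim 8) is a trainable parameter;
# x plays the role of fixed "speech token" embeddings from the model.
prompt = rng.normal(scale=0.1, size=(5, 8))
x = rng.normal(size=(3, 8))
y = 1                                           # toy downstream label

initial_loss, _ = loss_and_grad(prompt, x, y)
for _ in range(200):
    _, g = loss_and_grad(prompt, x, y)
    prompt -= 0.5 * g                           # gradient step on the prompt only
final_loss, _ = loss_and_grad(prompt, x, y)
```

Because the prompt has far fewer parameters than the model, this is what makes prompt tuning memory-efficient relative to full fine-tuning: the optimizer state and gradients exist only for the prompt matrix, and one frozen model can serve many tasks, each with its own small prompt.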