Feb. 21, 2024, 5:48 a.m. | Guan-Ting Lin, Cheng-Han Chiang, Hung-yi Lee

cs.CL updates on arXiv.org

arXiv:2402.12786v1 Announce Type: new
Abstract: In spoken dialogue, even when two current turns contain the same sentence, their responses may still differ if the turns are spoken in different styles. Spoken styles, which carry paralinguistic and prosodic information, mark the most significant difference between the text and speech modalities. When text-only LLMs are used to model spoken dialogue, they cannot give different responses based on the speaking style of the current turn. In this paper, we focus on enabling LLMs to listen …

