Human Latency Conversational Turns for Spoken Avatar Systems
April 26, 2024, 4:47 a.m. | Derek Jacoby, Tianyi Zhang, Aanchan Mohan, Yvonne Coady
cs.CL updates on arXiv.org
Abstract: A problem with many current Large Language Model (LLM) driven spoken dialogue systems is response time. Some efforts, such as Groq, address this issue through lightning-fast LLM inference, but we know from the cognitive psychology literature that in human-to-human dialogue, responses often begin before the speaker has completed their utterance. If we wish to maintain human dialogue latencies, no amount of delay for LLM processing is acceptable. In this paper, we discuss …
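The latency constraint the abstract raises can be illustrated with a back-of-the-envelope model (hypothetical, not from the paper): if the system waits for the utterance to end before doing any LLM work, the silence-to-response gap is the sum of every stage, whereas overlapping LLM prefill with the ongoing utterance hides that cost. The stage timings below are assumed placeholder values, and `turn_latency_ms` is an illustrative helper, not an API from the paper.

```python
# Hypothetical turn-latency model: sequential vs. overlapped pipelines.
# All millisecond figures are illustrative assumptions, not measurements
# from the paper.

def turn_latency_ms(asr_finalize_ms: int,
                    llm_prefill_ms: int,
                    llm_first_token_ms: int,
                    overlap: bool = False) -> int:
    """Gap between end of user speech and first token of the reply.

    With overlap=True, LLM prefill is assumed to have run on partial
    ASR transcripts while the user was still speaking, so only ASR
    finalization and first-token generation remain on the critical path.
    """
    if overlap:
        return asr_finalize_ms + llm_first_token_ms
    return asr_finalize_ms + llm_prefill_ms + llm_first_token_ms

sequential = turn_latency_ms(150, 400, 120)                # 670 ms
overlapped = turn_latency_ms(150, 400, 120, overlap=True)  # 270 ms
print(sequential, overlapped)
```

Even with these modest assumed numbers, only the overlapped pipeline approaches the sub-second (often near-zero or negative) gaps reported for human turn-taking, which is the point the abstract makes: speeding up the LLM alone cannot close the gap if processing only starts after the utterance ends.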