StreamingDialogue: Prolonged Dialogue Learning via Long Context Compression with Minimal Losses
March 14, 2024, 4:48 a.m. | Jia-Nan Li, Quan Tu, Cunli Mao, Zhengtao Yu, Ji-Rong Wen, Rui Yan
cs.CL updates on arXiv.org
Abstract: Standard Large Language Models (LLMs) struggle with handling dialogues with long contexts due to efficiency and consistency issues. According to our observation, dialogue contexts are highly structured, and the special End-of-Utterance (EoU) token in dialogues has the potential to aggregate information. We refer to the EoU tokens as "conversational attention sinks" (conv-attn sinks). Accordingly, we introduce StreamingDialogue, which compresses long dialogue history into conv-attn sinks with minimal losses, and thus reduces computational complexity quadratically …
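The core idea in the abstract is that tokens from past utterances can be dropped from attention as long as each utterance's EoU token remains visible as a "conversational attention sink." The sketch below illustrates one way such a sparse attention mask could look; the function name, the masking rule, and the assumption that the current utterance keeps full causal attention are illustrative guesses, not the paper's actual implementation.

```python
import torch


def conv_attn_sink_mask(token_ids: torch.Tensor, eou_id: int, current_start: int) -> torch.Tensor:
    """Build a hypothetical attention mask in which tokens attend to earlier
    dialogue history only through End-of-Utterance (EoU) positions, i.e. the
    "conv-attn sinks" from the abstract. This is a sketch, not the paper's code.

    token_ids:     1-D tensor of token ids for the full dialogue so far.
    eou_id:        id of the special EoU token (assumed to exist in the vocab).
    current_start: index where the current, uncompressed utterance begins.
    """
    n = token_ids.size(0)

    # Standard causal mask: position i may attend to positions j <= i.
    mask = torch.tril(torch.ones(n, n, dtype=torch.bool))

    # EoU positions act as sinks and remain visible as attention keys.
    visible_keys = token_ids == eou_id

    # Tokens of the current utterance are not compressed, so they stay visible too.
    visible_keys = visible_keys.clone()
    visible_keys[current_start:] = True

    # Restrict key columns: history is reachable only via its conv-attn sinks.
    return mask & visible_keys.unsqueeze(0)
```

Under this kind of mask, the number of attended key positions for new tokens grows with the number of utterances rather than the number of history tokens, which is the intuition behind the claimed reduction in computational cost.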