March 8, 2024, 5:47 a.m. | Shangjian Yin, Peijie Huang, Yuhong Xu, Haojing Huang, Jiatian Chen

cs.CL updates on arXiv.org arxiv.org

arXiv:2403.04481v1 Announce Type: new
Abstract: This study harnesses Large Language Models (LLMs) for multi-intent spoken language understanding (SLU), proposing a methodology that capitalizes on the generative power of LLMs within an SLU context. Our technique reconfigures entity slots specifically for LLM use in multi-intent SLU environments and introduces the concept of Sub-Intent Instruction (SII), improving the dissection and interpretation of intricate, multi-intent communication across varied domains. The resultant datasets, dubbed LM-MixATIS and LM-MixSNIPS, …
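To make the task concrete, the sketch below shows how a multi-intent utterance from a MixATIS-style dataset might be serialized into an instruction prompt with per-intent and per-slot annotations. This is an illustrative assumption about the general setup, not the authors' actual LM-MixATIS format; the prompt template, `build_prompt` helper, and slot notation are all hypothetical.

```python
# Hypothetical sketch: serializing a multi-intent SLU example into an LLM prompt.
# The template and the "#" intent separator are illustrative assumptions, not
# the format used in the paper's LM-MixATIS / LM-MixSNIPS datasets.

def build_prompt(utterance: str, intents: list[str], slots: dict[str, str]) -> str:
    """Render a multi-intent utterance and its annotations as an instruction prompt."""
    slot_str = "; ".join(f"{name} = {value}" for name, value in slots.items())
    return (
        "Identify every intent and entity slot in the utterance.\n"
        f"Utterance: {utterance}\n"
        f"Intents: {' # '.join(intents)}\n"
        f"Slots: {slot_str}"
    )

# A MixATIS-style utterance carries two intents and shared entity slots.
example = build_prompt(
    "show flights from Boston to Denver and what is the cheapest fare",
    intents=["atis_flight", "atis_airfare"],
    slots={"fromloc.city_name": "Boston", "toloc.city_name": "Denver"},
)
print(example)
```

Framing multi-intent SLU this way lets a generative model emit all intents and slots as one text sequence, which is the general idea the abstract attributes to using LLMs for this task.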

