Can we obtain significant success in RST discourse parsing by using Large Language Models?
March 11, 2024, 4:47 a.m. | Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura
cs.CL updates on arXiv.org (arxiv.org)
Abstract: Recently, decoder-only pre-trained large language models (LLMs), with tens of billions of parameters, have significantly impacted a wide range of natural language processing (NLP) tasks. While encoder-only and encoder-decoder pre-trained language models have already proved effective for discourse parsing, the extent to which LLMs can perform this task remains an open research question. This paper therefore explores how beneficial such LLMs are for Rhetorical Structure Theory (RST) discourse parsing. Here, the parsing process …