March 11, 2024, 4:47 a.m. | Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura

cs.CL updates on arXiv.org

arXiv:2403.05065v1 Announce Type: new
Abstract: Recently, decoder-only pre-trained large language models (LLMs), with several tens of billions of parameters, have significantly impacted a wide range of natural language processing (NLP) tasks. While encoder-only and encoder-decoder pre-trained language models have already proved effective for discourse parsing, the extent to which LLMs can perform this task remains an open research question. This paper therefore explores how beneficial such LLMs are for Rhetorical Structure Theory (RST) discourse parsing. Here, the parsing process …
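As a rough illustration of the task setting (not necessarily the paper's exact formulation), one way to use a decoder-only LLM for RST parsing is to cast a single parsing decision, such as labeling the nuclearity and rhetorical relation between two adjacent discourse spans, as text generation. The minimal Python sketch below assumes the Hugging Face transformers library; the model identifier and the toy prompt format are illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: one RST parsing decision framed as prompt-based text
# generation with a decoder-only LLM. Model choice and prompt wording are
# assumptions for illustration, not the method described in the paper.
from transformers import pipeline

# Any causal (decoder-only) LM can be substituted here.
generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-hf")

def label_span_pair(span_left: str, span_right: str) -> str:
    """Ask the LLM for the nuclearity and rhetorical relation holding
    between two adjacent discourse spans; return its raw text answer."""
    prompt = (
        "You are an RST discourse parser.\n"
        f"Span 1: {span_left}\n"
        f"Span 2: {span_right}\n"
        "State the nuclearity of each span (nucleus or satellite) and the "
        "rhetorical relation between them (e.g. Elaboration, Contrast, Cause):"
    )
    out = generator(prompt, max_new_tokens=32, do_sample=False)
    # Strip the prompt so only the model's continuation remains.
    return out[0]["generated_text"][len(prompt):].strip()

# Example usage on two toy elementary discourse units (EDUs).
print(label_span_pair(
    "The company reported record profits,",
    "although its stock price fell sharply."
))
```

A full parser would apply such decisions repeatedly (e.g. bottom-up over adjacent spans or top-down by splitting spans) to build a complete RST tree; the sketch shows only one local step.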

