March 15, 2024, 4:48 a.m. | Chang Zong, Yuyan Chen, Weiming Lu, Jian Shao, Yueting Zhuang

arXiv:2403.09131v1 Announce Type: new
Abstract: Large Language Models (LLMs) have demonstrated efficacy in various linguistic applications, including text summarization and controlled text generation. However, their capacity to switch between styles via fine-tuning remains underexplored. This study concentrates on textual professionalism and introduces a novel methodology, named ProSwitch, which equips a language model with the ability to produce both professional and non-professional responses through knowledge-guided instruction tuning. ProSwitch unfolds across three phases: data preparation for gathering domain knowledge and …
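
The abstract describes knowledge-guided instruction tuning in which the same question is answered in both a professional and a non-professional style. Below is a minimal sketch of how such paired training examples might be assembled, assuming a standard JSON-lines instruction-tuning layout; the field names, the style-tag wording, the prompt template, and the sample data are hypothetical and are not taken from the paper.

import json

# Hypothetical helper: pair a question with an answer under an
# explicit style instruction. The template wording is an assumption,
# not ProSwitch's actual prompt format.
def build_example(question: str, answer: str, professional: bool) -> dict:
    style = "professional" if professional else "non-professional"
    instruction = (
        f"Answer the following question in a {style} style.\n"
        f"Question: {question}"
    )
    return {"instruction": instruction, "output": answer}

# Hypothetical paired data: one question answered in both styles.
pairs = [
    {
        "question": "What causes seasonal influenza?",
        "professional": "Seasonal influenza is caused by influenza A and B "
                        "viruses, which undergo continual antigenic drift.",
        "non_professional": "The flu comes from viruses that keep changing "
                            "a little every year.",
    },
]

# Emit one training example per style so the model sees both
# renditions of each question during instruction tuning.
dataset = []
for p in pairs:
    dataset.append(build_example(p["question"], p["professional"], True))
    dataset.append(build_example(p["question"], p["non_professional"], False))

# Write in the JSON-lines layout commonly used for instruction tuning.
with open("style_switch_data.jsonl", "w") as f:
    for ex in dataset:
        f.write(json.dumps(ex) + "\n")

Each record carries the style requirement inside the instruction itself, so a single fine-tuned model can be steered toward either register at inference time by the prompt alone.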
