April 2, 2024, 7:45 p.m. | Chenxi Whitehouse, Fantine Huot, Jasmijn Bastings, Mostafa Dehghani, Chu-Cheng Lin, Mirella Lapata

cs.LG updates on arXiv.org

arXiv:2311.08572v2 Announce Type: replace-cross
Abstract: Although the advancements in pre-trained Large Language Models have significantly accelerated recent progress in NLP, their ever-increasing size poses significant challenges for conventional fine-tuning, especially in memory-intensive tasks. We investigate the potential of Parameter-Efficient Fine-Tuning, focusing on Low-Rank Adaptation (LoRA), in the domain of multilingual summarization, a task that is both challenging (due to typically long inputs) and relatively unexplored. We conduct an extensive study across different data availability scenarios, including high- and low-data settings, …
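For readers unfamiliar with the setup, the following is a minimal sketch of how LoRA-based parameter-efficient fine-tuning of a multilingual summarization model can be configured, assuming the Hugging Face transformers and peft libraries. The model name, rank, and other hyperparameters are illustrative only and are not taken from the paper.

```python
# Minimal LoRA setup sketch for multilingual summarization.
# Assumes Hugging Face `transformers` and `peft`; model and hyperparameters
# below are illustrative, not the configuration used in the paper.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import LoraConfig, TaskType, get_peft_model

base_model_name = "google/mt5-small"  # any multilingual seq2seq model
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(base_model_name)

# LoRA freezes the pre-trained weights W and learns a low-rank update
# delta_W = B @ A of rank r, so only a small fraction of parameters is trained.
lora_config = LoraConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    r=8,                        # rank of the update matrices
    lora_alpha=32,              # scaling factor applied to the update
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections in (m)T5-style models
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # reports the small trainable fraction
```

The wrapped model can then be trained with a standard seq2seq training loop (or the transformers Trainer) on a summarization dataset; at inference time the low-rank updates are applied on top of the frozen base weights.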

