March 1, 2024, 5:44 a.m. | Kai Huang, Hanyun Yin, Heng Huang, Wei Gao

cs.LG updates on arXiv.org arxiv.org

arXiv:2309.13192v2 Announce Type: replace
Abstract: Fine-tuning is the most effective way of adapting pre-trained large language models (LLMs) to downstream applications. With the fast growth of LLM-enabled AI applications and the democratization of open-sourced LLMs, fine-tuning has become feasible for non-expert individuals, but intensive LLM fine-tuning performed worldwide could result in substantial energy consumption and carbon footprint, with a correspondingly large environmental impact. Mitigating such environmental impact towards Green AI directly correlates to reducing the FLOPs of fine-tuning, but existing …
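The abstract is truncated before the paper's own approach is described, so the following is only a generic sketch of the FLOPs-reduction idea it motivates: restricting backpropagation to a small trainable subset of a pre-trained model, so gradients are neither computed nor stored for the frozen portion. The model name ("gpt2") and the choice of which layers to unfreeze are illustrative assumptions, not details from the paper.

```python
# Illustrative sketch only -- NOT the paper's method. It shows one common
# way to cut fine-tuning FLOPs: freeze most of a pre-trained LLM so that
# backpropagation stops early and skips most gradient computation.
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")  # assumed example model

# Freeze every parameter, then unfreeze only the last two transformer
# blocks and the LM head (note: GPT-2 ties lm_head to the input embeddings,
# so those become trainable as well).
for param in model.parameters():
    param.requires_grad = False
for block in model.transformer.h[-2:]:
    for param in block.parameters():
        param.requires_grad = True
for param in model.lm_head.parameters():
    param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable params: {trainable:,} / {total:,} ({trainable / total:.1%})")
```

With only the unfrozen parameters requiring gradients, an optimizer built over `filter(lambda p: p.requires_grad, model.parameters())` performs far fewer backward-pass FLOPs per step, which is the cost the abstract targets.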

