FinGPT-HPC: Efficient Pretraining and Finetuning Large Language Models for Financial Applications with High-Performance Computing
Feb. 22, 2024, 5:41 a.m. | Xiao-Yang Liu, Jie Zhang, Guoxuan Wang, Weiqing Tong, Anwar Walid
cs.LG updates on arXiv.org arxiv.org
Abstract: Large language models (LLMs) are computationally intensive. Their computation workload and memory footprint grow quadratically with the dimension (layer width). Most LLM parameters come from the linear layers of the transformer architecture, and these layers are highly redundant: they account for more than 80% of the computation workload and 99% of the model size. To pretrain and finetune LLMs efficiently, three major challenges must be addressed: 1) reducing the redundancy of the linear layers; …
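The abstract's point about redundant linear layers can be illustrated with a low-rank factorization, a standard technique for exploiting such redundancy (the paper's actual method is not shown in the truncated abstract, so this is only a minimal sketch): a dense d×d weight matrix costs d² parameters, while factorizing it into two matrices of rank r costs only 2·d·r, linear in d for a fixed r.

```python
import numpy as np

def low_rank_linear(x, U, V):
    """Apply a factorized linear layer: y = x @ (U @ V), computed as (x @ U) @ V
    so the full d x d product is never materialized."""
    return (x @ U) @ V

d, r = 1024, 64  # layer width and chosen rank (illustrative values)
rng = np.random.default_rng(0)
U = rng.standard_normal((d, r)) / np.sqrt(d)
V = rng.standard_normal((r, d)) / np.sqrt(r)
x = rng.standard_normal((8, d))  # a batch of 8 activations

dense_params = d * d           # parameters of a full d x d linear layer
low_rank_params = 2 * d * r    # parameters of the rank-r factorization
y = low_rank_linear(x, U, V)

print(f"dense: {dense_params}, low-rank: {low_rank_params} "
      f"({dense_params // low_rank_params}x fewer)")
```

For these values the factorization uses 8x fewer parameters, which mirrors why the dense cost grows quadratically with layer width while the low-rank cost grows only linearly.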