Feb. 6, 2024, 5:42 a.m. | Le Chen, Nesreen K. Ahmed, Akash Dutta, Arijit Bhattacharjee, Sixing Yu, Quazi Ishtiaque Mahmud, Waqwoya Abe

cs.LG updates on arXiv.org

Recently, language models (LMs), especially large language models (LLMs), have revolutionized the field of deep learning. Both encoder-decoder models and prompt-based techniques have shown immense potential for natural language processing and code-based tasks. Over the past several years, many research labs and institutions have invested heavily in high-performance computing, approaching or breaching exascale performance levels. In this paper, we posit that adapting and utilizing such language model-based techniques for tasks in high-performance computing (HPC) would be very beneficial. This study …
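As a concrete illustration of the prompt-based, code-centric usage the abstract alludes to, here is a minimal sketch of prompting a code LLM on an HPC-flavored task (annotating a loop for parallel execution). It uses the HuggingFace transformers pipeline API; the model choice (bigcode/starcoder2-3b) and the OpenMP prompt are illustrative assumptions, not the paper's method.

```python
# Minimal sketch: prompt-based code task with an LLM, assuming the
# HuggingFace `transformers` text-generation pipeline. The model name
# and prompt below are illustrative placeholders.
from transformers import pipeline

# Any instruction- or code-tuned model could stand in here.
generator = pipeline("text-generation", model="bigcode/starcoder2-3b")

prompt = (
    "// Annotate this C loop with an OpenMP pragma for parallel execution:\n"
    "for (int i = 0; i < n; i++) {\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n"
)

# Generate a short completion; the model is expected to emit something
# like `#pragma omp parallel for` ahead of the loop.
result = generator(prompt, max_new_tokens=64)
print(result[0]["generated_text"])
```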
