Cost-Performance Optimization for Processing Low-Resource Language Tasks Using Commercial LLMs
March 11, 2024, 4:47 a.m. | Arijit Nag, Animesh Mukherjee, Niloy Ganguly, Soumen Chakrabarti
cs.CL updates on arXiv.org
Abstract: Large Language Models (LLMs) exhibit impressive zero/few-shot inference and generation quality for high-resource languages (HRLs). A few of them have been trained on low-resource languages (LRLs) and deliver decent performance. Owing to the prohibitive cost of training LLMs, they are usually used as a network service, with the client charged by the count of input and output tokens. The number of tokens strongly depends on the script and language, as well as the LLM's sub-word vocabulary. …
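The per-token billing model the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's method: the rates and token counts below are hypothetical, chosen only to show how a longer tokenization of the same content in an LRL script inflates the bill.

```python
def estimate_cost(input_tokens, output_tokens, in_rate_per_1k, out_rate_per_1k):
    """Estimate a commercial LLM API bill from token counts.

    Rates are per 1,000 tokens. All numbers used with this function
    here are illustrative, not any provider's real pricing.
    """
    return (input_tokens / 1000) * in_rate_per_1k \
         + (output_tokens / 1000) * out_rate_per_1k


# Hypothetical example: the same content tokenized in an HRL (English)
# versus an LRL script that the sub-word vocabulary fragments heavily.
RATE_IN, RATE_OUT = 0.0005, 0.0015  # assumed $/1k tokens

hrl_cost = estimate_cost(input_tokens=120, output_tokens=80,
                         in_rate_per_1k=RATE_IN, out_rate_per_1k=RATE_OUT)
lrl_cost = estimate_cost(input_tokens=600, output_tokens=400,
                         in_rate_per_1k=RATE_IN, out_rate_per_1k=RATE_OUT)

# With these assumed counts the LRL request costs 5x the HRL one.
print(f"HRL: ${hrl_cost:.4f}, LRL: ${lrl_cost:.4f}")
```

Because cost scales linearly with token count, any tokenizer that splits an LRL script into many short sub-words multiplies the bill by the same factor, which is exactly the cost-performance trade-off the paper targets.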