Amharic LLaMA and LLaVA: Multimodal LLMs for Low Resource Languages
March 12, 2024, 4:51 a.m. | Michael Andersland
cs.CL updates on arXiv.org (arxiv.org)
Abstract: Large Language Models (LLMs) like GPT-4 and LLaMA have shown incredible proficiency at natural language processing tasks and have even begun to excel at tasks across other modalities such as vision and audio. Despite their success, LLMs often struggle to perform well on low-resource languages because there is so little training data available. This shortcoming is especially prevalent with open source models. In this work, we explore training LLaMA-2 to speak Amharic, a language which …
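The abstract describes adapting LLaMA-2 to a low-resource language. As a rough illustration only (not the paper's actual recipe), the following is a minimal sketch of parameter-efficient fine-tuning of LLaMA-2 on Amharic text with Hugging Face Transformers and PEFT; the corpus file name and all hyperparameters are placeholder assumptions.

# Minimal sketch: LoRA fine-tuning of LLaMA-2 on an Amharic text corpus.
# "amharic_corpus.txt" and the hyperparameters below are assumptions, not from the paper.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# LoRA keeps the trainable parameter count small, which helps when
# target-language data is scarce.
model = get_peft_model(model, LoraConfig(r=16, lora_alpha=32,
                                         target_modules=["q_proj", "v_proj"],
                                         task_type="CAUSAL_LM"))

# Placeholder corpus: one Amharic sentence per line.
data = load_dataset("text", data_files={"train": "amharic_corpus.txt"})
tokenized = data["train"].map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-amharic",
                           per_device_train_batch_size=4,
                           num_train_epochs=1,
                           learning_rate=2e-4,
                           fp16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()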