A Review of Multi-Modal Large Language and Vision Models
April 3, 2024, 4:46 a.m. | Kilian Carolan, Laura Fennelly, Alan F. Smeaton
cs.CL updates on arXiv.org
Abstract: Large Language Models (LLMs) have recently emerged as a focal point of research and application, driven by their unprecedented ability to understand and generate text with human-like quality. Even more recently, LLMs have been extended into multi-modal large language models (MM-LLMs), which extend their capabilities to handle image, video and audio information in addition to text. This opens up applications such as text-to-video generation, image captioning and text-to-speech, achieved either by retro-fitting …
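As a rough illustrative sketch (not taken from the paper), one of the applications mentioned above, image captioning, can be tried with an off-the-shelf multi-modal model through the Hugging Face transformers library. The checkpoint name and image path below are assumptions chosen for illustration, not something the review prescribes.

# Minimal sketch of image captioning with a pretrained multi-modal model.
# Assumes the `transformers` and `Pillow` packages are installed; the
# checkpoint and image path are illustrative assumptions only.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

image = Image.open("example.jpg").convert("RGB")  # hypothetical input image
inputs = processor(images=image, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(output_ids[0], skip_special_tokens=True))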