VIAssist: Adapting Multi-modal Large Language Models for Users with Visual Impairments
April 4, 2024, 4:42 a.m. | Bufang Yang, Lixing He, Kaiwei Liu, Zhenyu Yan
cs.LG updates on arXiv.org
Abstract: Individuals with visual impairments, encompassing both partial and total impairment of visual perception, are referred to as visually impaired (VI) people. An estimated 2.2 billion individuals worldwide are affected by visual impairments. Recent advances in multi-modal large language models (MLLMs) have showcased extraordinary capabilities across various domains, and it is desirable to put MLLMs' strong visual understanding and reasoning to work for VI individuals. However, it is challenging for VI people to use MLLMs due …
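The use case the abstract describes, asking an MLLM questions about an image on behalf of a VI user, can be illustrated with a minimal sketch. The snippet below queries the open llava-hf/llava-1.5-7b-hf checkpoint via Hugging Face transformers; the model ID, prompt format, and file name are assumptions for illustration, and this is not the paper's VIAssist system.

```python
# A minimal sketch of MLLM-based visual assistance, assuming the open
# llava-hf/llava-1.5-7b-hf checkpoint; illustration only, not VIAssist.
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

def answer_visual_question(image_path: str, question: str) -> str:
    """Ask the MLLM a question about an image, e.g. on behalf of a VI user."""
    image = Image.open(image_path)
    # LLaVA 1.5 expects the <image> placeholder inside a USER/ASSISTANT prompt.
    prompt = f"USER: <image>\n{question} ASSISTANT:"
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=120)
    return processor.decode(output[0], skip_special_tokens=True)

# Hypothetical usage: describe a photo the user has just taken.
print(answer_visual_question("photo.jpg", "Describe this scene in detail."))
```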