March 5, 2024, 2:52 p.m. | Yueqi Song, Simran Khanuja, Graham Neubig

cs.CL updates on arXiv.org

arXiv:2403.01404v1 Announce Type: new
Abstract: NLP models today strive to support multiple languages and modalities, improving accessibility for diverse users. In this paper, we evaluate their multilingual, multimodal capabilities by testing them on a visual reasoning task. We observe that proprietary systems such as GPT-4V currently obtain the best performance on this task, while open models lag behind. Surprisingly, GPT-4V exhibits similar performance across English and other languages, indicating the potential for equitable system development across languages. Our analysis of model …
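The abstract describes measuring a model's visual reasoning accuracy separately per language. Below is a minimal Python sketch of what such an evaluation loop might look like; the query_model stub, the dataset field names, and the exact-match scoring are illustrative assumptions, not the paper's actual protocol.

    # Minimal sketch of a per-language evaluation for a multimodal model.
    # query_model is a hypothetical stand-in for a call to a vision-language
    # model such as GPT-4V; the example/dataset layout is assumed.

    from collections import defaultdict

    def query_model(image_path: str, question: str) -> str:
        """Hypothetical wrapper around a multimodal model API."""
        raise NotImplementedError("plug in a real model client here")

    def evaluate(examples):
        """Compute accuracy per language on a visual reasoning task.

        `examples` is an iterable of dicts with (assumed) keys:
        'language', 'image', 'question', 'answer'.
        """
        correct = defaultdict(int)
        total = defaultdict(int)
        for ex in examples:
            prediction = query_model(ex["image"], ex["question"])
            total[ex["language"]] += 1
            # Exact-match scoring, case- and whitespace-insensitive.
            if prediction.strip().lower() == ex["answer"].strip().lower():
                correct[ex["language"]] += 1
        return {lang: correct[lang] / total[lang] for lang in total}

Comparing the resulting per-language accuracies is what would surface the English-versus-other-languages gap (or, as reported here for GPT-4V, its absence).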
