March 18, 2024, 4:48 a.m. | Minjun Kim, Seungwoo Song, Youhan Lee, Haneol Jang, Kyungtae Lim

cs.CL updates on arXiv.org

arXiv:2401.06443v2 Announce Type: replace
Abstract: The current research direction in generative models, such as the recently developed GPT4, aims to find relevant knowledge for multimodal and multilingual inputs in order to provide answers. In this context, the demand for multilingual evaluation of visual question answering (VQA), a representative multimodal task, has increased. Accordingly, in this study we propose a bilingual outside-knowledge VQA (BOK-VQA) dataset that can be extended to multilingualism. The proposed data include 17K images, 17K …
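To make the dataset's bilingual, outside-knowledge setup concrete, here is a minimal sketch of what a single BOK-VQA-style record might look like. The field names (image path, paired Korean/English questions, answer, and an associated knowledge triple) are illustrative assumptions, not the dataset's actual schema.

```python
from dataclasses import dataclass

# Hypothetical sketch of one bilingual outside-knowledge VQA record.
# Field names are assumptions for illustration, not the BOK-VQA schema.
@dataclass
class BokVqaExample:
    image_path: str                          # path to the associated image
    question_en: str                         # English question
    question_ko: str                         # Korean question (bilingual pair)
    answer: str                              # gold answer
    knowledge_triple: tuple[str, str, str]   # (head, relation, tail) outside knowledge

example = BokVqaExample(
    image_path="images/000123.jpg",
    question_en="In which country is this landmark located?",
    question_ko="이 랜드마크는 어느 나라에 있나요?",
    answer="France",
    knowledge_triple=("Eiffel Tower", "located_in", "France"),
)

print(example.question_en, "->", example.answer)
```

The point of the sketch is simply that each image is paired with questions in more than one language and with external knowledge that the answer depends on, which is what distinguishes outside-knowledge VQA from standard VQA.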

