April 9, 2024, 4:48 a.m. | Jiaxuan Li, Duc Minh Vo, Akihiro Sugimoto, Hideki Nakayama

cs.CV updates on arXiv.org

arXiv:2311.15879v2 Announce Type: replace
Abstract: Image captioning based on large language models (LLMs) can describe objects not explicitly observed in the training data; yet novel objects occur frequently, making it necessary to sustain up-to-date object knowledge for open-world comprehension. Instead of relying on large amounts of data and/or scaling up network parameters, we introduce a highly effective retrieval-augmented image captioning method that prompts LLMs with object names retrieved from an external visual-name memory (EVCap). We build ever-changing object knowledge memory using …
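
Below is a minimal sketch of the retrieval step the abstract describes: a memory of (visual embedding, object name) pairs queried by cosine similarity, with the retrieved names injected into an LLM prompt. The class and function names, the toy random embeddings, and the prompt template are hypothetical illustrations under those assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of an external visual-name memory in the spirit of
# EVCap. Real systems would use a learned visual encoder; here random
# vectors stand in for image/object embeddings.
import numpy as np


class VisualNameMemory:
    """Stores (visual embedding, object name) pairs; retrieval is
    cosine similarity over L2-normalized embeddings."""

    def __init__(self, dim: int):
        self.embeddings = np.empty((0, dim), dtype=np.float32)
        self.names: list[str] = []

    def add(self, embedding: np.ndarray, name: str) -> None:
        # Updating the memory is just appending a row, with no
        # re-training: the "minimal cost" update the abstract mentions.
        e = embedding / np.linalg.norm(embedding)
        self.embeddings = np.vstack([self.embeddings, e[None, :]])
        self.names.append(name)

    def retrieve(self, query: np.ndarray, top_k: int = 3) -> list[str]:
        # Cosine similarity reduces to a dot product on unit vectors.
        q = query / np.linalg.norm(query)
        sims = self.embeddings @ q
        top = np.argsort(-sims)[:top_k]
        return [self.names[i] for i in top]


def build_prompt(object_names: list[str]) -> str:
    # Illustrative template only; the paper's prompt format may differ.
    return (
        f"Objects that may appear in the image: {', '.join(object_names)}. "
        "Describe the image in one sentence."
    )


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memory = VisualNameMemory(dim=8)
    for name in ["zebra", "giraffe", "skateboard"]:
        memory.add(rng.standard_normal(8).astype(np.float32), name)
    query = rng.standard_normal(8).astype(np.float32)  # stand-in image embedding
    print(build_prompt(memory.retrieve(query, top_k=2)))
```

Because the memory is append-only and decoupled from the captioning model's weights, adding a newly encountered object is a single `add` call rather than a fine-tuning run, which is what makes the open-world setting tractable.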
