March 29, 2024, 4:45 a.m. | Yiyu Wang, Hao Luo, Jungang Xu, Yingfei Sun, Fan Wang

cs.CV updates on arXiv.org

arXiv:2403.19193v1 Announce Type: new
Abstract: Supervised image captioning approaches have made great progress, but collecting high-quality human-annotated image-text data remains challenging. Recently, large-scale vision-and-language models (e.g., CLIP) and large-scale generative language models (e.g., GPT-2) have shown strong performance on a variety of tasks, opening up new solutions for image captioning with web-paired data, unpaired data, or even text-only data. Among them, the mainstream solution is to project image embeddings into the text embedding space with …
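The projection idea the abstract alludes to can be sketched roughly as below. This is a generic, ClipCap-style illustration of mapping a CLIP image embedding into a language model's input-embedding space as a "visual prefix", not this paper's actual method; the `PrefixProjector` module, its dimensions, and the prefix length are hypothetical, and the projector would need to be trained (on paired or text-only data) before it produces sensible captions.

```python
# Hypothetical sketch: project a CLIP image embedding into GPT-2's
# input embedding space and decode a caption from that prefix.
import torch
import torch.nn as nn
from PIL import Image
from transformers import CLIPModel, CLIPProcessor, GPT2LMHeadModel, GPT2Tokenizer

class PrefixProjector(nn.Module):
    """Hypothetical module: maps one CLIP image embedding to a short
    sequence of GPT-2 input embeddings that acts as a visual prefix."""
    def __init__(self, clip_dim=512, gpt_dim=768, prefix_len=10):
        super().__init__()
        self.prefix_len, self.gpt_dim = prefix_len, gpt_dim
        self.mlp = nn.Sequential(
            nn.Linear(clip_dim, gpt_dim * prefix_len),
            nn.Tanh(),
        )

    def forward(self, image_emb):  # (B, clip_dim)
        return self.mlp(image_emb).view(-1, self.prefix_len, self.gpt_dim)

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
gpt2 = GPT2LMHeadModel.from_pretrained("gpt2")
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
projector = PrefixProjector()  # untrained here; would be fit before use

@torch.no_grad()
def caption(image: Image.Image, max_new_tokens: int = 20) -> str:
    pixels = processor(images=image, return_tensors="pt").pixel_values
    img_emb = clip.get_image_features(pixel_values=pixels)  # (1, 512)
    embeds = projector(img_emb)                             # (1, prefix_len, 768)
    token_ids = []
    for _ in range(max_new_tokens):  # greedy decoding from the prefix
        logits = gpt2(inputs_embeds=embeds).logits
        next_id = logits[:, -1, :].argmax(dim=-1)           # (1,)
        if next_id.item() == tokenizer.eos_token_id:
            break
        token_ids.append(next_id.item())
        # Append the new token's embedding and continue decoding.
        next_emb = gpt2.transformer.wte(next_id).unsqueeze(1)
        embeds = torch.cat([embeds, next_emb], dim=1)
    return tokenizer.decode(token_ids)
```

The appeal of this family of methods is that the frozen language model never sees image data directly: only the small projector bridges the modality gap, so it can in principle be trained cheaply, even from text-only corpora.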

