March 29, 2024, 4:45 a.m. | Yiyu Wang, Hao Luo, Jungang Xu, Yingfei Sun, Fan Wang

cs.CV updates on arXiv.org

arXiv:2403.19193v1 Announce Type: new
Abstract: Supervised image captioning approaches have made great progress, but collecting high-quality human-annotated image-text data remains challenging. Recently, large-scale vision-language models (e.g., CLIP) and large-scale generative language models (e.g., GPT-2) have shown strong performance on various tasks, offering new solutions for image captioning with web-paired data, unpaired data, or even text-only data. Among them, the mainstream solution is to project image embeddings into the text embedding space with …
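The projection step the abstract refers to can be illustrated with a minimal sketch: a learned linear map takes a CLIP-style image embedding and produces a handful of "prefix" vectors in the language model's embedding space, which the decoder (e.g., GPT-2) then attends to when generating the caption. The dimensions, the prefix length, and the random weights below are all illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def project_image_to_prefix(image_emb, W, k, d_text):
    """Map an image embedding to k prefix token embeddings in the
    language model's embedding space (illustrative linear projection)."""
    flat = image_emb @ W               # (k * d_text,)
    return flat.reshape(k, d_text)    # k prefix vectors for the decoder

rng = np.random.default_rng(0)
d_image, d_text, k = 512, 768, 4      # assumed CLIP ViT-B/32 and GPT-2 sizes
image_emb = rng.normal(size=d_image)  # stand-in for a CLIP image embedding
# W would be learned end-to-end; random here purely for illustration
W = rng.normal(size=(d_image, k * d_text)) / np.sqrt(d_image)

prefix = project_image_to_prefix(image_emb, W, k, d_text)
print(prefix.shape)  # (4, 768)
```

In practice the projection is often a small MLP or transformer trained while the CLIP encoder and language model stay frozen; the sketch above only shows the shape of the mapping.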

