Visual Clues: Bridging Vision and Language Foundations for Image Paragraph Captioning. (arXiv:2206.01843v2 [cs.CV] UPDATED)
Sept. 16, 2022, 1:15 a.m. | Yujia Xie, Luowei Zhou, Xiyang Dai, Lu Yuan, Nguyen Bach, Ce Liu, Michael Zeng
cs.CV updates on arXiv.org arxiv.org
People say, "A picture is worth a thousand words." How, then, can we extract that rich information from an image? We argue that by using visual clues to bridge large pretrained vision foundation models and language models, we can do so without any extra cross-modal training. Thanks to the strong zero-shot capability of foundation models, we start by constructing a rich semantic representation of the image (e.g., image tags, object attributes/locations, captions) as a structured textual prompt, …
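The core idea in the abstract, serializing the outputs of pretrained vision models (tags, object attributes and locations, captions) into one structured textual prompt for a language model, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the helper name, prompt layout, and all input data are invented for the example.

```python
# Hypothetical sketch of the "visual clues" prompting idea: vision-model
# outputs are serialized into a structured textual prompt that a language
# model could then turn into a paragraph caption. All data here is invented.

def build_visual_clue_prompt(tags, objects, captions):
    """Serialize vision-model outputs into a structured textual prompt.

    tags:     list of image-level tag strings
    objects:  list of (name, attribute_list, bounding_box) tuples
    captions: list of candidate caption strings
    """
    lines = ["Image tags: " + ", ".join(tags), "Objects:"]
    for name, attrs, box in objects:
        lines.append(f"- {name} ({', '.join(attrs)}) at {box}")
    lines.append("Captions:")
    for cap in captions:
        lines.append(f"- {cap}")
    lines.append("Write a detailed paragraph describing the image.")
    return "\n".join(lines)

prompt = build_visual_clue_prompt(
    tags=["park", "dog", "outdoor"],
    objects=[("dog", ["brown", "running"], (34, 50, 180, 210))],
    captions=["A dog runs across a grassy field."],
)
print(prompt)
```

The resulting text block would be passed as the prompt to an off-the-shelf language model, which is how the method avoids any extra cross-modal training.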