Augmenting Vision Language Pretraining by Learning Codebook with Visual Semantics. (arXiv:2208.00475v1 [cs.CV])
Aug. 2, 2022, 2:13 a.m. | Xiaoyuan Guo, Jiali Duan, C.-C. Jay Kuo, Judy Wawira Gichoya, Imon Banerjee
cs.CV updates on arXiv.org arxiv.org
The language modality within the vision-language pretraining framework is
innately discretized, endowing each word in the vocabulary with a semantic
meaning. In contrast, the visual modality is inherently continuous and
high-dimensional, which potentially hinders the alignment and fusion of the
vision and language modalities. We therefore propose to "discretize" the
visual representation by jointly learning a codebook that imbues each visual
token with a semantic meaning. We then use these discretized visual semantics
as self-supervised ground truths for building our Masked Image Modeling …
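The core idea of "discretizing" continuous visual features with a codebook can be illustrated with a generic vector-quantization step: each continuous token embedding is assigned the index of its nearest learned code vector, and that discrete index can then serve as a classification target for masked patches. This is a minimal sketch of that step only, not the authors' actual model; the array shapes and the Euclidean nearest-neighbor rule are assumptions for illustration.

```python
import numpy as np

def quantize(visual_tokens, codebook):
    """Map each continuous visual token to the index of its nearest
    codebook entry (Euclidean distance), producing one discrete
    'visual semantic' per token."""
    # visual_tokens: (N, D) continuous patch embeddings
    # codebook:      (K, D) learned code vectors
    # pairwise squared distances between every token and every code
    d2 = ((visual_tokens[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)  # (N,) discrete code indices

# toy example: 4 tokens quantized against a 3-entry codebook in 2-D
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [-1.0, 1.0]])
tokens = np.array([[0.1, -0.1], [0.9, 1.2], [-0.8, 0.9], [0.0, 0.1]])
codes = quantize(tokens, codebook)  # → array([0, 1, 2, 0])
```

In a masked-image-modeling setup, these indices would play the role that word IDs play on the language side: the model predicts the code index of each masked patch, giving the visual stream discrete, semantically meaningful targets.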