Efficient Modeling of Future Context for Image Captioning. (arXiv:2207.10897v1 [cs.CV])
July 25, 2022, 1:12 a.m. | Zhengcong Fei, Junshi Huang, Xiaoming Wei, Xiaolin Wei
cs.CV updates on arXiv.org arxiv.org
Existing approaches to image captioning usually generate a sentence word by word, from left to right, conditioned only on local context: the given image and the previously generated words. Many studies have sought to exploit global information during decoding, e.g., through iterative refinement. However, how to effectively and efficiently incorporate future context remains under-explored. To address this issue, inspired by the observation that Non-Autoregressive Image Captioning (NAIC) can leverage two-side relations with a modified mask …
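The contrast the abstract draws between left-to-right decoding and NAIC's two-side context comes down to the attention mask: a causal mask hides future positions, while a full mask exposes them. A minimal sketch of the two mask shapes, with illustrative function names not taken from the paper:

```python
import numpy as np

def causal_mask(n):
    # Left-to-right (autoregressive) decoding: position i may attend
    # only to positions <= i, i.e. the history of generated words.
    return np.tril(np.ones((n, n), dtype=bool))

def bidirectional_mask(n):
    # NAIC-style decoding: every position may attend to every other,
    # so "future" words are visible as context as well.
    return np.ones((n, n), dtype=bool)
```

Under the causal mask, `causal_mask(n)[i, j]` is False whenever `j > i`, which is exactly the constraint the paper aims to relax.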