LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. (arXiv:2204.08387v3 [cs.CL] UPDATED)
July 20, 2022, 1:12 a.m. | Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei
cs.CL updates on arXiv.org
Self-supervised pre-training techniques have achieved remarkable progress in
Document AI. Most multimodal pre-trained models use a masked language modeling
objective to learn bidirectional representations on the text modality, but they
differ in pre-training objectives for the image modality. This discrepancy adds
difficulty to multimodal representation learning. In this paper, we propose
LayoutLMv3 to pre-train multimodal Transformers for Document AI with
unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a
word-patch alignment objective to learn cross-modal alignment by predicting …
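
To make the three objectives concrete, below is a minimal PyTorch sketch of how unified text and image masking could be wired up as described in the abstract: cross-entropy masked language modeling on text tokens, masked image modeling over discrete patch tokens, and a binary word-patch alignment head. The encoder is omitted; the head shapes, vocabulary sizes, and label conventions are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn

class UnifiedMaskingHeads(nn.Module):
    """Sketch of the three pre-training heads; sizes are assumptions."""
    def __init__(self, hidden=768, vocab_size=30522, patch_vocab=8192):
        super().__init__()
        self.mlm_head = nn.Linear(hidden, vocab_size)   # masked language modeling
        self.mim_head = nn.Linear(hidden, patch_vocab)  # masked image modeling (discrete patch tokens)
        self.wpa_head = nn.Linear(hidden, 2)            # word-patch alignment: is the word's patch masked?

    def forward(self, text_states, image_states):
        return (self.mlm_head(text_states),
                self.mim_head(image_states),
                self.wpa_head(text_states))

def pretraining_loss(heads, text_states, image_states,
                     mlm_labels, mim_labels, wpa_labels):
    """All three objectives as cross-entropy over masked positions;
    unmasked positions carry the ignore label -100 (an assumed convention)."""
    mlm_logits, mim_logits, wpa_logits = heads(text_states, image_states)
    ce = nn.CrossEntropyLoss(ignore_index=-100)
    return (ce(mlm_logits.flatten(0, 1), mlm_labels.flatten())
            + ce(mim_logits.flatten(0, 1), mim_labels.flatten())
            + ce(wpa_logits.flatten(0, 1), wpa_labels.flatten()))

# Toy usage with random encoder outputs (batch=2, 16 text tokens, 49 patches).
if __name__ == "__main__":
    heads = UnifiedMaskingHeads()
    text = torch.randn(2, 16, 768)
    image = torch.randn(2, 49, 768)
    mlm = torch.full((2, 16), -100, dtype=torch.long); mlm[:, 3] = 42  # one masked word
    mim = torch.full((2, 49), -100, dtype=torch.long); mim[:, 10] = 7  # one masked patch
    wpa = torch.full((2, 16), -100, dtype=torch.long); wpa[:, 3] = 1   # that word's patch is masked
    print(pretraining_loss(heads, text, image, mlm, mim, wpa))

Summing the three cross-entropy terms with equal weight is a simplifying choice here; the paper's actual loss weighting and masking ratios are not specified in the excerpted abstract.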