LayoutLMv3: Pre-training for Document AI with Unified Text and Image Masking. (arXiv:2204.08387v3 [cs.CL] UPDATED)
July 20, 2022, 1:12 a.m. | Yupan Huang, Tengchao Lv, Lei Cui, Yutong Lu, Furu Wei
cs.CL updates on arXiv.org arxiv.org
Self-supervised pre-training techniques have achieved remarkable progress in
Document AI. Most multimodal pre-trained models use a masked language modeling
objective to learn bidirectional representations on the text modality, but they
differ in pre-training objectives for the image modality. This discrepancy adds
difficulty to multimodal representation learning. In this paper, we propose
LayoutLMv3 to pre-train multimodal Transformers for Document AI with
unified text and image masking. Additionally, LayoutLMv3 is pre-trained with a
word-patch alignment objective to learn cross-modal alignment by predicting …
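As a toy illustration of the unified-masking idea described above (not the authors' implementation), the sketch below masks text-token ids and discrete image-patch ids with one shared scheme, then derives word-patch alignment labels for the unmasked words. All names, ids, and the masking ratio are illustrative assumptions; the index-based word-to-patch correspondence is a deliberate simplification.

```python
import random

MASK_RATIO = 0.3  # illustrative; the paper's actual masking ratios differ

def mask_ids(ids, mask_id, ratio, rng):
    """Replace a random subset of discrete token ids with a mask id.

    The same routine serves both modalities, mirroring the 'unified
    text and image masking' idea at a very high level.
    """
    masked = list(ids)
    positions = rng.sample(range(len(ids)), k=max(1, int(len(ids) * ratio)))
    for p in positions:
        masked[p] = mask_id
    return masked, set(positions)

rng = random.Random(0)
text_ids = [101, 2023, 2003, 1037, 6254, 102]   # toy word-token ids
patch_ids = [11, 42, 7, 99, 23, 5]              # toy image-patch ids (e.g. from a discrete codebook)

masked_text, text_pos = mask_ids(text_ids, mask_id=103, ratio=MASK_RATIO, rng=rng)
masked_patch, patch_pos = mask_ids(patch_ids, mask_id=-1, ratio=MASK_RATIO, rng=rng)

# Word-patch alignment: for each *unmasked* word, label whether its
# corresponding image patch (here: same index, a simplification) is masked.
wpa_labels = {i: (i in patch_pos) for i in range(len(text_ids)) if i not in text_pos}
print(masked_text, masked_patch, wpa_labels)
```

In the actual model these labels supervise a binary classification head, encouraging the text and image representations to stay aligned across modalities.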