Oct. 7, 2022, 1:16 a.m. | Jingye Chen, Tengchao Lv, Lei Cui, Cha Zhang, Furu Wei

cs.CL updates on arXiv.org arxiv.org

Pre-training has recently driven rapid progress in document understanding. The
pre-training and fine-tuning framework has been used effectively to tackle text
in various formats, including plain text, document text, and web text. Despite
achieving promising performance, existing pre-trained models usually target one
specific document format at a time, making it difficult to combine knowledge
from multiple document formats. To address this, we propose XDoc, a unified
pre-trained model that handles different document formats in a single …
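The core idea of a single model spanning multiple document formats can be sketched as a shared backbone fed by lightweight, format-specific input adapters. The sketch below is a hypothetical illustration of that idea, not XDoc's actual architecture (the abstract above is truncated before the details): the feature dimensions, adapter names, and the use of a single linear layer as a stand-in backbone are all assumptions made for the example.

```python
# Hypothetical sketch, NOT the paper's architecture: one shared backbone with
# lightweight per-format adapters, so every document format lands in the same
# representation space and backbone knowledge is shared across formats.
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 8  # shared hidden size (illustrative)

# Format-specific adapters project each format's raw features into the shared
# hidden space; the input widths here are made up for the sketch.
adapters = {
    "plain": rng.standard_normal((4, HIDDEN)),     # e.g. token features only
    "document": rng.standard_normal((6, HIDDEN)),  # e.g. token + 2D layout
    "web": rng.standard_normal((5, HIDDEN)),       # e.g. token + markup features
}

# A single shared backbone (one linear map standing in for a Transformer).
backbone = rng.standard_normal((HIDDEN, HIDDEN))

def encode(fmt: str, features: np.ndarray) -> np.ndarray:
    """Run a format's features through its adapter, then the shared backbone."""
    hidden = features @ adapters[fmt]   # (seq_len, HIDDEN)
    return hidden @ backbone            # same parameters for every format

# All three formats produce representations of identical shape.
outs = {fmt: encode(fmt, rng.standard_normal((3, w.shape[0])))
        for fmt, w in adapters.items()}
print({fmt: o.shape for fmt, o in outs.items()})
```

The point of the sketch is the parameter sharing: only the small adapters differ per format, so whatever the backbone learns from one format is available to the others.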

