Web: http://arxiv.org/abs/2203.07836

May 5, 2022, 1:11 a.m. | Xuefeng Bai, Yulong Chen, Yue Zhang

cs.CL updates on arXiv.org

Abstract meaning representation (AMR) highlights the core semantic
information of text in a graph structure. Recently, pre-trained language models
(PLMs) have advanced the tasks of AMR parsing and AMR-to-text generation,
respectively. However, PLMs are typically pre-trained on textual data and are
thus sub-optimal for modeling structural knowledge. To this end, we investigate
graph self-supervised training to improve the structure awareness of PLMs over
AMR graphs. In particular, we introduce two graph auto-encoding strategies for
graph-to-graph pre-training and four tasks to integrate text …
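
The graph auto-encoding idea can be sketched in a few lines: linearize an AMR graph, corrupt it, and train a seq2seq PLM to reconstruct the original graph. The snippet below is a minimal illustration only, assuming a PENMAN-style linearization and a BART backbone; the node-masking corruption, the mask_graph_nodes helper, and the facebook/bart-base checkpoint are assumptions for the sketch, not the paper's exact strategies.

    # Minimal sketch of graph-to-graph denoising pre-training over a
    # linearized AMR graph (assumptions: PENMAN linearization, BART backbone).
    import random
    from transformers import BartTokenizer, BartForConditionalGeneration

    def mask_graph_nodes(linearized_amr: str,
                         mask_token: str = "<mask>",
                         p: float = 0.15) -> str:
        """Hypothetical corruption: mask random concept/variable tokens so the
        model must reconstruct the full graph from the corrupted version."""
        tokens = linearized_amr.split()
        corrupted = [mask_token if (t.isalnum() and random.random() < p) else t
                     for t in tokens]
        return " ".join(corrupted)

    tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
    model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

    # "The boy wants to go." as a PENMAN-linearized AMR graph.
    amr = "( w / want-01 :ARG0 ( b / boy ) :ARG1 ( g / go-02 :ARG0 b ) )"
    inputs = tokenizer(mask_graph_nodes(amr), return_tensors="pt")
    labels = tokenizer(amr, return_tensors="pt").input_ids

    # One denoising step: encode the corrupted graph, decode the original
    # graph, and back-propagate the reconstruction loss.
    loss = model(**inputs, labels=labels).loss
    loss.backward()

In practice such a step would run over a large corpus of (silver) AMR graphs before fine-tuning on parsing or generation; the choice of what to corrupt (nodes, edges, subgraphs) is exactly where auto-encoding strategies differ.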

