Graph-to-Text Generation with Dynamic Structure Pruning

Web: http://arxiv.org/abs/2209.07258

Sept. 16, 2022, 1:16 a.m. | Liang Li, Ruiying Geng, Bowen Li, Can Ma, Yinliang Yue, Binhua Li, Yongbin Li

cs.CL updates on arXiv.org

Most graph-to-text works are built on the encoder-decoder framework with a cross-attention mechanism. Recent studies have shown that explicitly modeling the input graph structure can significantly improve performance. However, the vanilla structural encoder cannot capture, in a single forward pass, all of the specialized information needed across decoding steps, resulting in inaccurate semantic representations. Meanwhile, cross-attention flattens the input graph into an unordered sequence, ignoring the original graph structure. As a result, the obtained input graph context vector …
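To make the flattening point concrete, here is a minimal sketch of a vanilla cross-attention step over a flattened graph. This is hypothetical PyTorch, not code from the paper; the function name `flat_cross_attention` and the tensor shapes are illustrative. The context vector is a permutation-invariant weighted sum of node encodings, so shuffling the nodes, i.e. discarding the graph structure entirely, leaves the output unchanged.

```python
import torch
import torch.nn.functional as F

def flat_cross_attention(decoder_state, node_encodings):
    """Vanilla cross-attention over a flattened input graph (illustrative).

    decoder_state:  (batch, d)    -- current decoder hidden state (query)
    node_encodings: (batch, n, d) -- encoder outputs for the n graph nodes,
                                     treated as an unordered token sequence
    """
    d = node_encodings.size(-1)
    # One scalar score per node; the graph's edges never enter the computation.
    scores = torch.einsum("bd,bnd->bn", decoder_state, node_encodings) / d ** 0.5
    weights = F.softmax(scores, dim=-1)
    # Context vector: a weighted sum of node encodings. Permuting the nodes
    # permutes the weights identically, so the result is unchanged -- the
    # original graph structure plays no role in this step.
    return torch.einsum("bn,bnd->bd", weights, node_encodings)

# Sanity check: shuffling the node order leaves the context vector intact.
q = torch.randn(1, 64)
nodes = torch.randn(1, 5, 64)
perm = torch.randperm(5)
assert torch.allclose(flat_cross_attention(q, nodes),
                      flat_cross_attention(q, nodes[:, perm]), atol=1e-6)
```

The assertion passes for any permutation, which is exactly the deficiency the abstract describes: the context vector computed this way cannot distinguish two graphs that share nodes but differ in structure.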

Tags: arxiv, graph pruning, text, text generation
