Jan. 24, 2022, 2:10 a.m. | Zhanghao Wu, Paras Jain, Matthew A. Wright, Azalia Mirhoseini, Joseph E. Gonzalez, Ion Stoica

cs.LG updates on arXiv.org

Graph neural networks are powerful architectures for structured datasets.
However, current methods struggle to represent long-range dependencies. Scaling
the depth or width of GNNs is insufficient to broaden receptive fields as
larger GNNs encounter optimization instabilities such as vanishing gradients
and representation oversmoothing, while pooling-based approaches have yet to
become as universally useful as in computer vision. In this work, we propose
the use of Transformer-based self-attention to learn long-range pairwise
relationships, with a novel "readout" mechanism to obtain a …
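
The abstract only sketches the architecture, so the following is a minimal, hypothetical illustration of the idea it describes: local GNN message passing to build node embeddings, followed by Transformer self-attention over all nodes, with a learnable CLS-style readout token standing in for the "readout" mechanism. The class name, the simple mean-aggregation GNN layer, and all hyperparameters are assumptions for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class GNNThenTransformer(nn.Module):
    """Sketch: local message passing, then global self-attention with a readout token."""

    def __init__(self, in_dim: int, hidden_dim: int, num_gnn_layers: int = 3,
                 num_attn_layers: int = 2, num_heads: int = 4):
        super().__init__()
        self.input_proj = nn.Linear(in_dim, hidden_dim)
        # Stand-in GNN layers; any message-passing layer could be used here.
        self.gnn_layers = nn.ModuleList(
            nn.Linear(hidden_dim, hidden_dim) for _ in range(num_gnn_layers)
        )
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=hidden_dim, nhead=num_heads, batch_first=True
        )
        self.attn = nn.TransformerEncoder(encoder_layer, num_layers=num_attn_layers)
        # Hypothetical learnable readout token, prepended to the node sequence.
        self.readout_token = nn.Parameter(torch.zeros(1, 1, hidden_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x:   (batch, num_nodes, in_dim) node features
        # adj: (batch, num_nodes, num_nodes) normalized adjacency matrix
        h = self.input_proj(x)
        for layer in self.gnn_layers:
            # Neighborhood aggregation via the adjacency matrix (local receptive field).
            h = torch.relu(layer(adj @ h))
        # Prepend the readout token; self-attention then mixes all node pairs globally.
        readout = self.readout_token.expand(h.size(0), -1, -1)
        h = self.attn(torch.cat([readout, h], dim=1))
        # The readout position summarizes long-range, graph-wide context.
        return h[:, 0]

# Usage: a batch of 2 graphs with 10 nodes and 16-dim features each.
model = GNNThenTransformer(in_dim=16, hidden_dim=64)
graph_embedding = model(torch.randn(2, 10, 16), torch.rand(2, 10, 10))
print(graph_embedding.shape)  # torch.Size([2, 64])
```

Because the self-attention layers are permutation-equivariant and attend over every node pair, the readout token can aggregate information between distant nodes without requiring deeper message passing.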

arxiv, attention, global attention, graph, graph neural networks, neural networks
