Web: http://arxiv.org/abs/2201.09332

Jan. 28, 2022, 2:11 a.m. | Anson Bastos, Abhishek Nadgeri, Kuldeep Singh, Hiroki Kanezashi, Toyotaro Suzumura, Isaiah Onando Mulang'

cs.LG updates on arXiv.org

Transformers have been proven to be inadequate for graph representation
learning. To understand this inadequacy, there is a need to investigate
whether spectral analysis of the transformer will reveal insights into its
expressive power. Similar studies have already established that spectral
analysis of graph neural networks (GNNs) provides extra perspectives on their
expressiveness. In this work, we systematically study and prove the link
between the spatial and spectral domains in the realm of the transformer. We
further provide a theoretical analysis that the …
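The spatial/spectral link the abstract refers to is grounded in the graph Fourier transform: the eigenvectors of the graph Laplacian form an orthonormal basis, so any node signal can be moved between the spatial and spectral domains and back without loss. The sketch below is not from the paper; it is a generic NumPy illustration of this standard construction on a toy 4-node cycle graph.

```python
import numpy as np

# Toy undirected graph (a 4-node cycle) given by its adjacency matrix.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

D = np.diag(A.sum(axis=1))   # degree matrix
L = D - A                    # combinatorial graph Laplacian

# Spectral decomposition: the eigenvectors U form the graph Fourier basis,
# and the eigenvalues play the role of (graph) frequencies.
eigvals, U = np.linalg.eigh(L)

# A spatial signal on the nodes and its spectral coefficients.
x = np.array([1.0, 0.0, -1.0, 0.0])
x_hat = U.T @ x              # forward graph Fourier transform
x_rec = U @ x_hat            # inverse transform recovers the signal

print(np.allclose(x, x_rec))  # True: the basis is orthonormal, so nothing is lost
```

Spectral analyses of GNNs (and, in this paper, of transformers) study which filters over these frequencies a given architecture can realize, which is why the Laplacian eigenbasis is the natural lens for expressiveness questions.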

