ChebMixer: Efficient Graph Representation Learning with MLP Mixer
March 26, 2024, 4:47 a.m. | Xiaoyan Kui, Haonan Yan, Qinsong Li, Liming Chen, Beiji Zou
cs.CV updates on arXiv.org (arxiv.org)
Abstract: Graph neural networks have achieved remarkable success in learning graph representations, especially the graph Transformer, which has recently shown superior performance on various graph mining tasks. However, the graph Transformer generally treats nodes as tokens, which results in complexity quadratic in the number of nodes during self-attention computation. The graph MLP Mixer addresses this challenge by adopting the efficient MLP Mixer technique from computer vision. However, its time-consuming graph-token extraction process limits its performance. In …
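To make the complexity contrast concrete: self-attention over N node tokens costs O(N²) per layer, while an MLP-Mixer layer mixes a fixed set of tokens with two plain MLPs, one applied across the token axis and one across the channel axis. The sketch below is a generic, minimal NumPy illustration of that token/channel mixing pattern, not the paper's ChebMixer architecture; all names (`mlp`, `mixer_layer`) and sizes are hypothetical choices for the example.

```python
import numpy as np

def mlp(x, w1, w2):
    # Two-layer MLP with ReLU (the generic Mixer uses GELU; ReLU keeps this minimal).
    return np.maximum(x @ w1, 0.0) @ w2

def mixer_layer(tokens, rng):
    # tokens: (n_tokens, d) matrix of token features.
    n, d = tokens.shape

    # Token mixing: the MLP acts along the *token* axis (via transpose),
    # so information flows between tokens without any N x N attention matrix.
    h = 2 * n
    w1 = rng.standard_normal((n, h)) * 0.1
    w2 = rng.standard_normal((h, n)) * 0.1
    y = tokens + mlp(tokens.T, w1, w2).T  # residual connection

    # Channel mixing: the MLP acts per token along the feature axis.
    hc = 2 * d
    w3 = rng.standard_normal((d, hc)) * 0.1
    w4 = rng.standard_normal((hc, d)) * 0.1
    return y + mlp(y, w3, w4)  # residual connection

rng = np.random.default_rng(0)
tokens = rng.standard_normal((16, 32))  # e.g. 16 graph tokens, 32-dim features
out = mixer_layer(tokens, rng)
print(out.shape)  # (16, 32)
```

Because the token-mixing weights have a fixed shape tied to the number of tokens, graph MLP Mixer approaches first partition the graph into a small, fixed set of tokens; per the abstract, it is this token-extraction step that the paper identifies as the bottleneck.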