Comparing Graph Transformers via Positional Encodings
Feb. 23, 2024, 5:42 a.m. | Mitchell Black, Zhengchao Wan, Gal Mishne, Amir Nayyeri, Yusu Wang
cs.LG updates on arXiv.org
Abstract: The distinguishing power of graph transformers is closely tied to the choice of positional encoding: features used to augment the base transformer with information about the graph. There are two primary types of positional encoding: absolute positional encodings (APEs) and relative positional encodings (RPEs). APEs assign features to each node and are given as input to the transformer. RPEs instead assign a feature to each pair of nodes, e.g., graph distance, and are used to …
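To make the APE/RPE distinction concrete, here is a minimal sketch (not the paper's code; all function names are illustrative) on a toy graph. It pairs a common APE, Laplacian eigenvector features attached to each node, with a common RPE, shortest-path distance used as an attention bias:

```python
import numpy as np

# Toy undirected path graph 0-1-2-3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def laplacian_ape(A, k=2):
    """APE example: the k smallest non-trivial Laplacian eigenvectors,
    giving one k-dim feature vector per node to concatenate onto the
    transformer's node inputs."""
    L = np.diag(A.sum(axis=1)) - A
    _, eigvecs = np.linalg.eigh(L)       # eigenvectors, ascending eigenvalues
    return eigvecs[:, 1:k + 1]           # shape (n, k); skip the constant vector

def shortest_path_rpe(A):
    """RPE example: graph (shortest-path) distance for every node pair,
    computed by Floyd-Warshall; used inside attention rather than as input."""
    n = A.shape[0]
    dist = np.where(A > 0, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist                          # shape (n, n)

# APE usage: augment node features, then run an otherwise standard transformer.
X = np.random.randn(4, 8)                # hypothetical base node features
X_in = np.concatenate([X, laplacian_ape(A)], axis=1)

# RPE usage: bias the attention logits with the pairwise feature.
scores = X @ X.T / np.sqrt(X.shape[1])
scores = scores - shortest_path_rpe(A)   # nearer pairs attend more strongly
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
```

The structural difference the abstract describes is visible in the shapes: the APE is an (n, k) per-node feature fed in at the input layer, while the RPE is an (n, n) per-pair feature that modifies the attention computation itself.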