Self-Attention in Colors: Another Take on Encoding Graph Structure in Transformers. (arXiv:2304.10933v1 [cs.LG])
cs.LG updates on arXiv.org
We introduce CSA (Chromatic Self-Attention), a novel self-attention mechanism
that extends the notion of attention scores to attention _filters_,
independently modulating the feature channels. We showcase CSA in CGT
(Chromatic Graph Transformer), a fully-attentional graph Transformer that
integrates both graph structural information and edge features, completely
bypassing the need for local message-passing components. Our method flexibly
encodes graph structure through node-node interactions by enriching the
original edge features with a relative positional encoding scheme. We propose a …
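The core idea, per the abstract, is replacing each scalar attention score with a per-channel attention filter. A minimal NumPy sketch of what that might look like, under my own assumed formulation (elementwise query-key products producing one logit per channel, plus an additive edge/positional term `E`; the names and the exact combination are illustrative, not the paper's actual equations):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def chromatic_self_attention(X, Wq, Wk, Wv, E):
    """Hypothetical sketch of per-channel attention "filters".

    X : (n, d) node features
    E : (n, n, d) edge / relative positional features (assumed additive)
    Returns (n, d) updated node features.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # (n, d) each
    # Elementwise product instead of a dot product, so every feature
    # channel gets its own attention logit (an n x n x d tensor).
    logits = Q[:, None, :] * K[None, :, :] + E    # (n, n, d)
    # Normalize over neighbors independently per channel: each slice
    # filters[:, :, c] is a standard attention matrix for channel c.
    filters = softmax(logits, axis=1)             # (n, n, d)
    # Channel-wise modulation of values, aggregated over neighbors.
    return (filters * V[None, :, :]).sum(axis=1)  # (n, d)

rng = np.random.default_rng(0)
n, d = 4, 8
X = rng.normal(size=(n, d))
W = lambda: rng.normal(size=(d, d)) / np.sqrt(d)
E = rng.normal(size=(n, n, d)) * 0.1              # stand-in edge encodings
out = chromatic_self_attention(X, W(), W(), W(), E)
print(out.shape)
```

With `E = 0` and all channels sharing one logit, this collapses back to ordinary self-attention; the extra channel dimension is what lets edge features modulate each feature channel separately, which is how the abstract's "attention filters" differ from scalar scores.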