Feb. 23, 2024, 5:43 a.m. | Davoud Ataee Tarzanagh, Yingcong Li, Christos Thrampoulidis, Samet Oymak

cs.LG updates on arXiv.org

arXiv:2308.16898v3 Announce Type: replace
Abstract: Since its inception in "Attention Is All You Need", the transformer architecture has led to revolutionary advancements in NLP. The attention layer within the transformer admits a sequence of input tokens $X$ and makes them interact through pairwise similarities computed as softmax$(XQK^\top X^\top)$, where $(K,Q)$ are the trainable key-query parameters. In this work, we establish a formal equivalence between the optimization geometry of self-attention and a hard-margin SVM problem that separates optimal input tokens from non-optimal …
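
To make the attention expression in the abstract concrete, here is a minimal NumPy sketch of the pairwise similarity computation softmax$(XQK^\top X^\top)$. The shapes, random values, and the `softmax` helper are illustrative assumptions, not part of the paper's setup.

```python
import numpy as np

def softmax(Z, axis=-1):
    # Numerically stable row-wise softmax.
    Z = Z - Z.max(axis=axis, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=axis, keepdims=True)

# Assumed shapes (illustrative only): T tokens of dimension d,
# key/query parameter matrices of size d x d.
T, d = 5, 8
rng = np.random.default_rng(0)
X = rng.normal(size=(T, d))   # input token sequence
Q = rng.normal(size=(d, d))   # trainable query parameters
K = rng.normal(size=(d, d))   # trainable key parameters

# Pairwise similarities softmax(X Q K^T X^T), as written in the abstract.
A = softmax(X @ Q @ K.T @ X.T, axis=-1)
print(A.shape)  # (T, T); each row sums to 1
```

Each row of `A` gives the attention weights one token places on all tokens in the sequence; the paper's SVM equivalence concerns how the combined parameter $KQ^\top$ (or equivalently $W = QK^\top$) behaves under optimization.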

