March 7, 2024, 5:43 a.m. | Mathieu Seraphim, Alexis Lechervy, Florian Yger, Luc Brun, Olivier Etard

cs.LG updates on arXiv.org

arXiv:2309.07579v5 Announce Type: replace
Abstract: In recent years, Transformer-based self-attention mechanisms have been successfully applied to the analysis of a variety of context-dependent data types, from text to images and beyond, including data from non-Euclidean geometries. In this paper, we present such a mechanism, designed to classify sequences of Symmetric Positive Definite (SPD) matrices while preserving their Riemannian geometry throughout the analysis. We apply our method to automatic sleep staging on time series of EEG-derived covariance matrices from a standard dataset, obtaining …
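For readers unfamiliar with the geometry the abstract refers to, the sketch below shows two standard SPD-manifold primitives that a pipeline of this kind rests on: estimating an SPD covariance matrix from an EEG epoch, and comparing two such matrices with the affine-invariant Riemannian distance. This is only an illustrative sketch of the underlying geometry, not the paper's attention mechanism; the function names, the ridge regularization, and the choice of metric here are assumptions for illustration.

```python
# Illustrative sketch: standard SPD-geometry primitives underlying pipelines
# that analyze EEG-derived covariance matrices with Riemannian tools.
# NOT the paper's structure-preserving attention mechanism.
import numpy as np
from scipy.linalg import logm, fractional_matrix_power

def epoch_covariance(eeg_epoch, eps=1e-6):
    """Covariance matrix of one EEG epoch with shape (channels, samples).
    A small ridge (eps * I) keeps the estimate strictly positive definite."""
    c = np.cov(eeg_epoch)
    return c + eps * np.eye(c.shape[0])

def log_euclidean_map(spd):
    """Map an SPD matrix to the space of symmetric matrices via the matrix
    logarithm -- the log-Euclidean tangent-space embedding."""
    return logm(spd).real

def affine_invariant_distance(a, b):
    """Affine-invariant Riemannian distance between two SPD matrices:
    d(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F."""
    a_inv_sqrt = fractional_matrix_power(a, -0.5)
    middle = a_inv_sqrt @ b @ a_inv_sqrt
    return np.linalg.norm(logm(middle).real, ord="fro")

# Toy usage: two random EEG-like epochs (8 channels, 512 samples each).
rng = np.random.default_rng(0)
c1 = epoch_covariance(rng.standard_normal((8, 512)))
c2 = epoch_covariance(rng.standard_normal((8, 512)))
print(affine_invariant_distance(c1, c2))
```

Respecting this geometry (rather than treating covariance matrices as flat vectors) is what "preserving their Riemannian geometry throughout the analysis" refers to; the distance above, for instance, is invariant under congruence transformations of the underlying signals.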
