April 11, 2024, 4:43 a.m. | Ioannis Romanelis, Vlassis Fotis, Konstantinos Moustakas, Adrian Munteanu

cs.LG updates on arXiv.org

arXiv:2306.10798v3 Announce Type: replace-cross
Abstract: In this paper we delve into the properties of transformers, attained through self-supervision, in the point cloud domain. Specifically, we evaluate the effectiveness of Masked Autoencoding as a pretraining scheme, and explore Momentum Contrast as an alternative. In our study we investigate the impact of data quantity on the learned features, and uncover similarities in the transformer's behavior across domains. Through comprehensive visualizations, we observe that the transformer learns to attend to semantically meaningful regions, …
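The abstract centers on Masked Autoencoding (MAE) as a pretraining scheme for point-cloud transformers. As a rough illustration only, and not the authors' implementation, the PyTorch sketch below shows the generic MAE recipe on point-cloud patches: embed local patches into tokens, encode only a random subset of them, and reconstruct the masked patches. The module name TinyPointMAE and the patch shapes are hypothetical; real point-cloud pipelines typically group points with FPS/kNN and use a Chamfer-distance reconstruction loss.

```python
# Minimal masked-autoencoding sketch for point-cloud patches (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPointMAE(nn.Module):
    def __init__(self, group_size=32, dim=128, mask_ratio=0.6):
        super().__init__()
        self.mask_ratio = mask_ratio
        # Embed each local patch (group_size points x 3 coords) into one token.
        self.patch_embed = nn.Sequential(
            nn.Linear(group_size * 3, dim), nn.GELU(), nn.Linear(dim, dim))
        enc_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=3)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        dec_layer = nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=2)
        # Predict the raw coordinates of each masked patch.
        self.head = nn.Linear(dim, group_size * 3)

    def forward(self, patches):                      # patches: (B, G, group_size*3)
        B, G, C = patches.shape
        tokens = self.patch_embed(patches)           # (B, G, dim)
        num_mask = int(G * self.mask_ratio)
        perm = torch.rand(B, G, device=patches.device).argsort(dim=1)
        mask_idx, keep_idx = perm[:, :num_mask], perm[:, num_mask:]
        # Encode only the visible (unmasked) patch tokens.
        keep = torch.gather(tokens, 1,
                            keep_idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1)))
        latent = self.encoder(keep)
        # Append learnable mask tokens and decode the full sequence.
        masked = self.mask_token.expand(B, num_mask, -1)
        decoded = self.decoder(torch.cat([latent, masked], dim=1))
        pred = self.head(decoded[:, -num_mask:])     # (B, num_mask, group_size*3)
        target = torch.gather(patches, 1,
                              mask_idx.unsqueeze(-1).expand(-1, -1, C))
        # MSE stands in for the Chamfer distance used in practice.
        return F.mse_loss(pred, target)

# Usage: random patches just to exercise one training step.
model = TinyPointMAE()
patches = torch.randn(2, 64, 32 * 3)
loss = model(patches)
loss.backward()
```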

