Feb. 6, 2024, 5:49 a.m. | Renan A. Rojas-Gomez, Karan Singhal, Ali Etemad, Alex Bijamov, Warren R. Morningstar, Philip Andrew Mansfield

cs.LG updates on arXiv.org

Existing data augmentation in self-supervised learning, while diverse, fails to preserve the inherent structure of natural images. This results in distorted augmented samples with compromised semantic information, ultimately impacting downstream performance. To overcome this, we propose SASSL: Style Augmentations for Self Supervised Learning, a novel augmentation technique based on Neural Style Transfer. SASSL decouples semantic and stylistic attributes in images and applies transformations exclusively to the style while preserving content, generating diverse samples that better retain semantics. Our technique boosts …
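The core idea of decoupling style from content can be illustrated with a much simpler stand-in for full Neural Style Transfer: adaptive instance normalization (AdaIN), which re-aligns the per-channel mean and standard deviation of content features to those of a style reference while leaving spatial structure (and thus semantics) untouched. The sketch below is an assumption-laden simplification for intuition only, not the SASSL method itself; the function names and the blending parameter `alpha` are hypothetical.

```python
import numpy as np

def adain(content, style, eps=1e-5):
    """Match per-channel mean/std of `content` features to those of `style`.

    content, style: float arrays of shape (C, H, W). Spatial layout of
    `content` is preserved; only channel-wise statistics (the "style") change.
    """
    c_mean = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mean = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    # Normalize content statistics, then re-scale to the style's statistics.
    return s_std * (content - c_mean) / (c_std + eps) + s_mean

def style_augment(content, style, alpha=0.5):
    """Blend stylized and original features to control augmentation strength.

    alpha=0 returns the original features; alpha=1 fully adopts the style
    statistics. Intermediate values give milder, semantics-preserving
    augmentations.
    """
    stylized = adain(content, style)
    return alpha * stylized + (1 - alpha) * content
```

With `alpha=1.0`, the output's per-channel means exactly match the style reference's, while the content's spatial arrangement is unchanged, which is the sense in which style is transformed "exclusively" while content is preserved.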

