Feb. 23, 2024, 5:46 a.m. | Ifeoma Veronica Nwabufo, Jan Niklas Böhm, Philipp Berens, Dmitry Kobak

cs.CV updates on arXiv.org

arXiv:2402.14566v1 Announce Type: new
Abstract: Self-supervised learning methods based on data augmentations, such as SimCLR, BYOL, or DINO, allow obtaining semantically meaningful representations of image datasets and are widely used prior to supervised fine-tuning. A recent self-supervised learning method, $t$-SimCNE, uses contrastive learning to directly train a 2D representation suitable for visualisation. When applied to natural image datasets, $t$-SimCNE yields 2D visualisations with semantically meaningful clusters. In this work, we used $t$-SimCNE to visualise medical image datasets, including examples from …
