Feb. 29, 2024, 5:42 a.m. | Bashir Kazimi, Karina Ruzaeva, Stefan Sandfeld

cs.LG updates on arXiv.org

arXiv:2402.18286v1 Announce Type: cross
Abstract: In this work, we explore the potential of self-supervised learning from unlabeled electron microscopy datasets, taking a step toward building a foundation model in this field. We show how self-supervised pretraining facilitates efficient fine-tuning for a spectrum of downstream tasks, including semantic segmentation, denoising, noise & background removal, and super-resolution. Experimentation with varying model complexities and receptive field sizes reveals the remarkable phenomenon that fine-tuned models of lower complexity consistently outperform more complex models with …
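The excerpt only names the overall workflow (self-supervised pretraining on unlabeled micrographs, then fine-tuning a smaller model per downstream task), not the method itself. As a rough illustration of that pretrain-then-fine-tune pattern, here is a minimal PyTorch sketch assuming a masked-reconstruction pretext task; the encoder architecture, layer sizes, and training loop are illustrative assumptions, not the paper's actual design.

```python
# Sketch of self-supervised pretraining followed by fine-tuning.
# The pretext task (masked-pixel reconstruction), network sizes, and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Small convolutional encoder shared by the pretext and downstream tasks."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class ReconstructionHead(nn.Module):
    """Maps encoder features back to image space for the pretext task."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Conv2d(channels, 1, 1)
    def forward(self, f):
        return self.net(f)

def pretrain_step(encoder, head, images, optimizer, mask_ratio=0.5):
    """Self-supervised step: hide random pixels, predict the originals."""
    mask = (torch.rand_like(images) > mask_ratio).float()
    recon = head(encoder(images * mask))
    loss = nn.functional.mse_loss(recon, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# --- pretraining on unlabeled micrographs (random tensors stand in here) ---
encoder, head = Encoder(), ReconstructionHead()
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
for _ in range(5):
    batch = torch.rand(8, 1, 64, 64)  # stand-in for unlabeled EM patches
    pretrain_step(encoder, head, batch, opt)

# --- fine-tuning: reuse the pretrained encoder for semantic segmentation ---
num_classes = 3
seg_head = nn.Conv2d(32, num_classes, 1)  # per-pixel class logits
ft_opt = torch.optim.Adam(
    list(encoder.parameters()) + list(seg_head.parameters()), lr=1e-4)
images = torch.rand(8, 1, 64, 64)
labels = torch.randint(0, num_classes, (8, 64, 64))
logits = seg_head(encoder(images))
loss = nn.functional.cross_entropy(logits, labels)
ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```

In this pattern the pretrained encoder would be reused with a different lightweight head for each downstream task the abstract lists (denoising, noise & background removal, super-resolution), which is what makes the fine-tuning stage cheap relative to training from scratch.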

