Feb. 29, 2024, 5:42 a.m. | Bashir Kazimi, Karina Ruzaeva, Stefan Sandfeld

cs.LG updates on arXiv.org

arXiv:2402.18286v1 Announce Type: cross
Abstract: In this work, we explore the potential of self-supervised learning from unlabeled electron microscopy datasets, taking a step toward building a foundation model in this field. We show how self-supervised pretraining facilitates efficient fine-tuning for a spectrum of downstream tasks, including semantic segmentation, denoising, noise & background removal, and super-resolution. Experimentation with varying model complexities and receptive field sizes reveals the remarkable phenomenon that fine-tuned models of lower complexity consistently outperform more complex models with …
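The abstract describes a two-stage workflow: self-supervised pretraining on unlabeled electron microscopy images, followed by supervised fine-tuning of the pretrained encoder on downstream tasks such as semantic segmentation. The sketch below is not the authors' code; it illustrates that workflow under assumptions, using a masked-reconstruction pretext task as one plausible self-supervised objective (the truncated abstract does not name the exact pretext task), with a deliberately tiny encoder and illustrative hyperparameters.

```python
# Minimal pretrain-then-fine-tune sketch (illustrative, not the paper's method).
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    """Small convolutional encoder; depth and width are illustrative."""
    def __init__(self, channels=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

def pretrain_step(encoder, decoder, images, optimizer, mask_ratio=0.5):
    """One self-supervised step: reconstruct pixels hidden by a random mask."""
    mask = (torch.rand_like(images) > mask_ratio).float()  # 1 = visible pixel
    recon = decoder(encoder(images * mask))
    loss = ((recon - images) ** 2 * (1 - mask)).mean()  # penalize masked pixels only
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

def finetune_step(encoder, head, images, labels, optimizer):
    """One supervised step for a downstream task, e.g. semantic segmentation."""
    logits = head(encoder(images))
    loss = nn.functional.cross_entropy(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Stage 1: self-supervised pretraining on stand-in "unlabeled EM images".
    enc = ConvEncoder()
    dec = nn.Conv2d(32, 1, 3, padding=1)  # reconstruction decoder
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    em_images = torch.rand(4, 1, 64, 64)
    print("pretrain loss:", pretrain_step(enc, dec, em_images, opt))

    # Stage 2: fine-tune the pretrained encoder with a segmentation head
    # (3 hypothetical classes; real class counts depend on the dataset).
    seg_head = nn.Conv2d(32, 3, 1)
    opt_ft = torch.optim.Adam(list(enc.parameters()) + list(seg_head.parameters()), lr=1e-4)
    labels = torch.randint(0, 3, (4, 64, 64))
    print("fine-tune loss:", finetune_step(enc, seg_head, em_images, labels, opt_ft))
```

The key design point the abstract emphasizes is reuse: the same pretrained encoder is shared across tasks (segmentation, denoising, super-resolution), with only the task head swapped out, which is what makes fine-tuning efficient relative to training from random initialization.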

