Aug. 11, 2023, 6:51 a.m. | Simon Dahan, Mariana da Silva, Daniel Rueckert, Emma C Robinson

cs.CV updates on arXiv.org

Self-supervision has been widely explored as a means of addressing the lack
of inductive biases in vision transformer architectures, which limits
generalisation when networks are trained on small datasets. This is crucial in
the context of cortical imaging, where phenotypes are complex and
heterogeneous, but the available datasets are limited in size. This paper
builds upon recent advancements in translating vision transformers to surface
meshes and investigates the potential of Masked AutoEncoder (MAE)
self-supervision for cortical surface learning. By reconstructing …
