March 8, 2024, 5:45 a.m. | Miles Everett, Mingjun Zhong, Georgios Leontidis

cs.CV updates on arXiv.org

arXiv:2403.04724v1 Announce Type: new
Abstract: We propose Masked Capsule Autoencoders (MCAE), the first Capsule Network that utilises pretraining in a self-supervised manner. Capsule Networks have emerged as a powerful alternative to Convolutional Neural Networks (CNNs), and have shown favourable properties when compared to Vision Transformers (ViT), but have struggled to effectively learn when presented with more complex data, leading to Capsule Network models that do not scale to modern tasks. Our proposed MCAE model alleviates this issue by reformulating the …
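
The paper's exact reformulation is cut off above, but as a rough intuition for the masked-autoencoder pretraining objective the abstract refers to, here is a minimal, hypothetical PyTorch sketch: patchify an image, hide a random subset of patches, encode only the visible ones, and train to reconstruct the hidden ones. It deliberately substitutes a plain MLP encoder/decoder for the paper's capsule layers and routing, which are not reproduced here; all names (ToyMaskedAutoencoder, patchify, mask_ratio) are illustrative assumptions, not from the paper.

```python
# Illustrative sketch of generic masked-autoencoder pretraining, the kind of
# self-supervised objective the MCAE abstract describes. Hypothetical code:
# a plain MLP stands in for the paper's capsule encoder/decoder.
import torch
import torch.nn as nn

def patchify(imgs, p=4):
    # (B, C, H, W) -> (B, N, C*p*p) non-overlapping patches
    B, C, H, W = imgs.shape
    x = imgs.unfold(2, p, p).unfold(3, p, p)           # (B, C, H/p, W/p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * p * p)
    return x

class ToyMaskedAutoencoder(nn.Module):
    def __init__(self, patch_dim=48, hidden=128, mask_ratio=0.75):
        super().__init__()
        self.mask_ratio = mask_ratio
        self.encoder = nn.Sequential(nn.Linear(patch_dim, hidden), nn.GELU(),
                                     nn.Linear(hidden, hidden))
        self.decoder = nn.Sequential(nn.Linear(hidden, hidden), nn.GELU(),
                                     nn.Linear(hidden, patch_dim))
        self.mask_token = nn.Parameter(torch.zeros(hidden))

    def forward(self, patches):
        B, N, D = patches.shape
        n_keep = int(N * (1 - self.mask_ratio))
        # random per-image permutation; keep the first n_keep patch indices
        perm = torch.rand(B, N, device=patches.device).argsort(dim=1)
        keep = perm[:, :n_keep]
        visible = torch.gather(patches, 1,
                               keep.unsqueeze(-1).expand(-1, -1, D))
        z = self.encoder(visible)                      # encode visible patches only
        # scatter encodings back; masked slots get the learned mask token
        full = self.mask_token.expand(B, N, -1).clone()
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, z.size(-1)), z)
        recon = self.decoder(full)
        # reconstruction loss is computed only on the masked patches
        mask = torch.ones(B, N, device=patches.device)
        mask.scatter_(1, keep, 0.0)
        loss = ((recon - patches) ** 2).mean(-1)
        return (loss * mask).sum() / mask.sum()

imgs = torch.randn(8, 3, 32, 32)                       # e.g. CIFAR-sized input
model = ToyMaskedAutoencoder()
loss = model(patchify(imgs))
loss.backward()
```

The key design point carried over from masked image modelling is that the loss is taken only over masked patches, so the encoder cannot solve the task by copying its input and must learn representations that generalise, which is what makes the objective useful as pretraining.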
