Masked Autoencoder for Self-Supervised Pre-training on Lidar Point Clouds. (arXiv:2207.00531v2 [cs.CV] UPDATED)
Oct. 25, 2022, 1:13 a.m. | Georg Hess, Johan Jaxing, Elias Svensson, David Hagerman, Christoffer Petersson, Lennart Svensson
cs.LG updates on arXiv.org arxiv.org
Masked autoencoding has become a successful pretraining paradigm for
Transformer models for text, images, and, recently, point clouds. Raw
automotive datasets are suitable candidates for self-supervised pre-training as
they generally are cheap to collect compared to annotations for tasks like 3D
object detection (OD). However, the development of masked autoencoders for
point clouds has focused solely on synthetic and indoor data. Consequently,
existing methods have tailored their representations and models toward small
and dense point clouds with homogeneous point densities. In …
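The masking step the abstract describes can be sketched as follows. This is a minimal NumPy illustration of patch-wise masking for masked-autoencoder pretraining, not the paper's implementation; `mask_point_patches`, its patch grouping, and its parameters are hypothetical (the paper's method handles the sparse, heterogeneous densities of lidar, which naive fixed-size patching does not).

```python
import numpy as np

def mask_point_patches(points, patch_size=32, mask_ratio=0.75, seed=0):
    """Split a point cloud into fixed-size patches and randomly mask a
    fraction of them. Hypothetical helper for illustration only: the
    encoder would see the visible patches, and the decoder would be
    trained to reconstruct the masked ones."""
    rng = np.random.default_rng(seed)
    n_patches = len(points) // patch_size
    # Drop any leftover points and group the rest into (n_patches, patch_size, 3).
    patches = points[: n_patches * patch_size].reshape(n_patches, patch_size, 3)
    n_masked = int(round(mask_ratio * n_patches))
    perm = rng.permutation(n_patches)
    masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]
    return patches[visible_idx], patches[masked_idx]

# Toy point cloud: 1024 points in 3D.
cloud = np.random.default_rng(1).normal(size=(1024, 3))
visible, masked = mask_point_patches(cloud)
print(visible.shape, masked.shape)  # (8, 32, 3) (24, 32, 3)
```

With 1024 points, a patch size of 32, and a 75% mask ratio, 8 of the 32 patches remain visible to the encoder while 24 become reconstruction targets.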
Tags: arxiv, autoencoder, lidar, masked autoencoder, pre-training, training