Voxel-MAE: Masked Autoencoders for Pre-training Large-scale Point Clouds. (arXiv:2206.09900v4 [cs.CV] UPDATED)
Aug. 17, 2022, 1:12 a.m. | Chen Min, Xinli Xu, Dawei Zhao, Liang Xiao, Yiming Nie, Bin Dai
cs.CV updates on arXiv.org arxiv.org
Mask-based pre-training has achieved great success in self-supervised
learning for images and language without manually annotated supervision.
However, it has not yet been studied for large-scale point clouds, which
contain redundant spatial information. In this research, we propose a masked
voxel autoencoder network for pre-training on large-scale point clouds,
dubbed Voxel-MAE. Our key idea is to transform the point clouds into voxel
representations and classify whether each voxel contains points. This
simple but effective strategy makes the network voxel-aware of the …
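The voxelization-plus-occupancy idea described above can be sketched in a few lines. This is not the authors' code, just a minimal illustration: points are binned into a regular voxel grid, and each voxel is labeled by whether it contains any points (the target the masked network would predict). The grid bounds and resolution here are illustrative assumptions.

```python
import numpy as np

def voxel_occupancy(points, grid_min, grid_max, grid_shape):
    """Return a boolean occupancy grid: True where a voxel holds >= 1 point."""
    grid_min = np.asarray(grid_min, dtype=float)
    grid_max = np.asarray(grid_max, dtype=float)
    shape = np.asarray(grid_shape)
    # Map each point to integer voxel indices within the grid.
    idx = ((points - grid_min) / (grid_max - grid_min) * shape).astype(int)
    # Drop points that fall outside the grid bounds.
    inside = np.all((idx >= 0) & (idx < shape), axis=1)
    idx = idx[inside]
    # Mark every voxel that received at least one point as occupied.
    occ = np.zeros(grid_shape, dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ

# Toy example: 5 random points in a 4x4x4 grid over the unit cube.
rng = np.random.default_rng(0)
pts = rng.random((5, 3))
occ = voxel_occupancy(pts, [0, 0, 0], [1, 1, 1], (4, 4, 4))
print(occ.sum())  # number of occupied voxels
```

In a masked-autoencoder setup, a subset of these voxels would be masked out and the network trained to predict their occupancy from the visible ones.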