Web: http://arxiv.org/abs/2201.10326

Jan. 26, 2022, 2:11 a.m. | Xingguang Yan, Liqiang Lin, Niloy J. Mitra, Dani Lischinski, Danny Cohen-Or, Hui Huang

cs.LG updates on arXiv.org

We present ShapeFormer, a transformer-based network that produces a
distribution of object completions, conditioned on incomplete, and possibly
noisy, point clouds. The resultant distribution can then be sampled to generate
likely completions, each exhibiting plausible shape details while being
faithful to the input. To facilitate the use of transformers for 3D, we
introduce a compact 3D representation, vector quantized deep implicit function,
that utilizes spatial sparsity to represent a close approximation of a 3D shape
by a short sequence of …
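
The key idea in the passage above, discretizing only the occupied (spatially sparse) regions of a shape into a short sequence of codebook indices, can be illustrated with a minimal sketch. This is not the authors' code: the class name, codebook size, feature dimension, and the random features standing in for a learned encoder are illustrative assumptions.

import torch
import torch.nn as nn

class VectorQuantizer(nn.Module):
    """Maps each continuous local feature to the index of its nearest codebook entry."""
    def __init__(self, num_codes: int = 512, dim: int = 64):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (N, dim) features at occupied locations only; empty space is skipped,
        # which is what keeps the resulting token sequence short.
        dists = torch.cdist(feats, self.codebook.weight)  # (N, num_codes) L2 distances
        return dists.argmin(dim=-1)                       # (N,) discrete token ids

# Toy usage: only occupied cells produce tokens.
torch.manual_seed(0)
occupied_feats = torch.randn(48, 64)   # e.g. 48 occupied cells out of a coarse grid
tokens = VectorQuantizer()(occupied_feats)
print(tokens.shape)                    # torch.Size([48])

In the full method, a transformer then models the distribution over such token sequences conditioned on tokens derived from the partial, possibly noisy input, so sampling from it yields multiple plausible completions.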

Tags: arxiv, cv, transformer
