April 17, 2023, 8:13 p.m. | Yu-Qi Yang, Yu-Xiao Guo, Jian-Yu Xiong, Yang Liu, Hao Pan, Peng-Shuai Wang, Xin Tong, Baining Guo

cs.CV updates on arXiv.org

Pretrained backbones with fine-tuning have been widely adopted in 2D vision
and natural language processing tasks and have demonstrated significant
advantages over task-specific networks. In this paper, we present a pretrained
3D backbone, named Swin3D, which is the first to outperform all
state-of-the-art methods on downstream 3D indoor scene understanding tasks.
Our backbone network is based on a 3D Swin transformer and is carefully
designed to efficiently conduct self-attention on sparse voxels with linear
memory complexity and to capture the irregularity of point signals via
generalized …
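The abstract only names the mechanism, so here is a rough illustration of one common way to get linear-memory self-attention on sparse voxels: partition the occupied voxels into fixed-size windows and attend only within each window, so attention matrices stay small and total cost scales with the number of occupied voxels rather than quadratically with the scene. This is a minimal sketch under that assumption, not the authors' Swin3D implementation; `sparse_window_attention`, its coordinate-hashing scheme, and all parameters are hypothetical, and the sketch omits positional embeddings entirely.

```python
import torch

def sparse_window_attention(coords, feats, window_size=4, num_heads=4):
    """Self-attention restricted to local windows of occupied voxels.

    coords: (N, 3) non-negative integer voxel coordinates of non-empty voxels
    feats:  (N, C) per-voxel features, C divisible by num_heads
    Attention is computed only within each window, so memory is
    O(N * window_population) rather than O(N^2) over the whole scene.
    """
    N, C = feats.shape
    head_dim = C // num_heads

    # Assign each occupied voxel to a window by integer division.
    win_ids = coords // window_size                       # (N, 3)
    # Hash window coordinates to one id per voxel (simplification:
    # assumes the window grid is smaller than 10_000 per axis).
    key = (win_ids * torch.tensor([1, 10_000, 100_000_000])).sum(-1)
    order = key.argsort()
    feats, key = feats[order], key[order]

    out = torch.empty_like(feats)
    # Process each window independently; each attention matrix is tiny.
    for wid in key.unique():
        idx = (key == wid).nonzero(as_tuple=True)[0]
        x = feats[idx]                                    # (n_w, C)
        q = k = v = x.view(len(idx), num_heads, head_dim)
        attn = torch.einsum('nhd,mhd->hnm', q, k) / head_dim ** 0.5
        out[idx] = torch.einsum('hnm,mhd->nhd',
                                attn.softmax(-1), v).reshape(len(idx), C)

    # Undo the sort so outputs align with the input voxel order.
    return out[order.argsort()]

# Toy usage: 1000 occupied voxels in a 32^3 grid with 32-dim features.
coords = torch.randint(0, 32, (1000, 3))
feats = torch.randn(1000, 32)
y = sparse_window_attention(coords, feats)                # (1000, 32)
```

Because each window holds a bounded number of occupied voxels, memory grows linearly in the number of non-empty voxels, which is the property the abstract highlights; the shifted-window trick of Swin transformers would additionally let information flow between neighboring windows across layers.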
