Swin3D: A Pretrained Transformer Backbone for 3D Indoor Scene Understanding. (arXiv:2304.06906v1 [cs.CV])
cs.CV updates on arXiv.org
Pretrained backbones with fine-tuning have been widely adopted in 2D vision
and natural language processing tasks and have demonstrated significant
advantages over task-specific networks. In this paper, we present a pretrained
3D backbone, named Swin3D, which is the first to outperform state-of-the-art
methods on downstream 3D indoor scene understanding tasks. Our backbone network is based
on a 3D Swin transformer and carefully designed to efficiently conduct
self-attention on sparse voxels with linear memory complexity and capture the
irregularity of point signals via generalized …
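The memory claim above hinges on restricting self-attention to local windows: attending only within fixed-size windows over the occupied (sparse) voxels keeps each attention matrix bounded, so total memory grows linearly with the number of voxels rather than quadratically. A minimal sketch of that idea, with illustrative names and shapes that are assumptions rather than the paper's actual implementation:

```python
import numpy as np

def window_attention(coords, feats, window=4):
    """Window-partitioned self-attention over sparse voxels (sketch).

    coords: (N, 3) integer voxel coordinates of occupied voxels.
    feats:  (N, C) per-voxel features.
    Attention is computed only within each cubic window of side `window`,
    so every attention matrix is at most (window**3, window**3) and total
    memory is O(N), unlike O(N**2) global attention.
    """
    N, C = feats.shape
    out = np.zeros_like(feats)

    # Group occupied voxels by the window they fall into.
    groups = {}
    for i, c in enumerate(coords):
        groups.setdefault(tuple(c // window), []).append(i)

    # Plain scaled dot-product attention, restricted to each window.
    for idx in groups.values():
        x = feats[idx]                               # (n_w, C), n_w bounded
        scores = x @ x.T / np.sqrt(C)                # (n_w, n_w)
        scores = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn = scores / scores.sum(axis=1, keepdims=True)
        out[idx] = attn @ x
    return out
```

This omits everything that makes the real backbone work well (learned query/key/value projections, the generalized positional embedding for irregular point signals, and shifted windows across layers); it only demonstrates why windowing bounds the attention cost.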