Oct. 13, 2022, 1:17 a.m. | Ziyi Wang, Xumin Yu, Yongming Rao, Jie Zhou, Jiwen Lu

cs.CV updates on arXiv.org

Nowadays, pre-training big models on large-scale datasets has become a
crucial topic in deep learning. Pre-trained models with high representation
ability and transferability have achieved great success and dominate many
downstream tasks in natural language processing and 2D vision. However, it is
non-trivial to extend such a pretraining-tuning paradigm to 3D vision, given
the limited training data, which are relatively inconvenient to collect. In
this paper, we provide a new perspective on leveraging pre-trained 2D knowledge
in 3D …

