Aug. 26, 2022, 7:06 p.m. | /u/Singularian2501

r/MachineLearning ([www.reddit.com](https://www.reddit.com))

Paper: [https://arxiv.org/abs/2208.10442](https://arxiv.org/abs/2208.10442)

GitHub: [https://github.com/microsoft/unilm/tree/master/beit](https://github.com/microsoft/unilm/tree/master/beit) The code will be released there; I only found the link via [paperswithcode.com](https://paperswithcode.com)!

Abstract:

>A big convergence of language, vision, and multimodal pretraining is emerging. In this work, we introduce a general-purpose **multimodal foundation model** **BEiT-3**, which achieves **state-of-the-art transfer performance** on both vision and vision-language tasks. Specifically, we advance the big convergence from three aspects: **backbone architecture, pretraining task, and model scaling up**. We introduce Multiway Transformers for **general-purpose modeling**, where the modular architecture enables …
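The Multiway Transformer mentioned in the abstract shares the self-attention layers across modalities while routing each token to a modality-specific feed-forward "expert". A minimal NumPy sketch of one such block, with illustrative names and shapes (not the paper's actual code or hyperparameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, Wq, Wk, Wv):
    # Single-head self-attention, shared across all modalities.
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = softmax(q @ k.T / np.sqrt(q.shape[-1]))
    return scores @ v

def multiway_block(x, modality_ids, attn_w, ffn_experts):
    # Shared attention over the mixed (vision + language) token sequence.
    h = x + self_attention(x, *attn_w)
    # Route each token to its modality-specific FFN expert.
    out = np.empty_like(h)
    for m, (W1, W2) in ffn_experts.items():
        mask = modality_ids == m
        out[mask] = h[mask] + np.maximum(h[mask] @ W1, 0) @ W2  # ReLU FFN
    return out

# Toy dimensions and weights (illustrative only).
d, d_ff = 8, 16
attn_w = [rng.standard_normal((d, d)) * 0.1 for _ in range(3)]
ffn_experts = {
    "vision": (rng.standard_normal((d, d_ff)) * 0.1,
               rng.standard_normal((d_ff, d)) * 0.1),
    "language": (rng.standard_normal((d, d_ff)) * 0.1,
                 rng.standard_normal((d_ff, d)) * 0.1),
}
x = rng.standard_normal((6, d))
modality_ids = np.array(["vision"] * 3 + ["language"] * 3)
y = multiway_block(x, modality_ids, attn_w, ffn_experts)
print(y.shape)  # (6, 8)
```

The key design point is that the shared attention lets image and text tokens interact in one fused sequence, while the per-modality experts keep modality-specific processing separate.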

