Jan. 7, 2022, 3:16 p.m. | /u/Yuqing7

Artificial Intelligence www.reddit.com

Baidu researchers propose ERNIE-ViLG, a 10-billion-parameter pretraining framework for bidirectional text-image generation. Pretrained on 145 million Chinese image-text pairs, ERNIE-ViLG achieves state-of-the-art performance on both text-to-image and image-to-text generation tasks.
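The core idea of a unified bidirectional framework is that one autoregressive transformer handles both directions: images are quantized into discrete codes, and either modality can serve as the source or the target in a single token sequence. Below is a minimal, hypothetical sketch of that sequence construction; the special-token ids and the function name are illustrative assumptions, not ERNIE-ViLG's actual implementation.

```python
def build_bidirectional_sequence(text_ids, image_code_ids, direction):
    """Arrange text tokens and discrete image codes into one
    autoregressive sequence, so a single transformer can be trained
    on both text-to-image and image-to-text generation.

    Hypothetical sketch: special-token ids are placeholders, and real
    image_code_ids would come from an image tokenizer (e.g. a VQ-style
    discretizer), not raw pixels.
    """
    BOS, SEP, EOS = -1, -2, -3  # placeholder special-token ids
    if direction == "text2image":
        src, tgt = text_ids, image_code_ids
    elif direction == "image2text":
        src, tgt = image_code_ids, text_ids
    else:
        raise ValueError(f"unknown direction: {direction}")
    # The model is trained to predict tgt autoregressively given src.
    return [BOS] + src + [SEP] + tgt + [EOS]
```

For example, `build_bidirectional_sequence([1, 2], [7, 8], "text2image")` places the text tokens before the separator and the image codes after it, while `"image2text"` swaps the two sides, so the same weights learn both generation directions.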

Here is a quick read: Baidu’s 10-Billion Scale ERNIE-ViLG Unified Generative Pretraining Framework Achieves SOTA Performance on Bidirectional Vision-Language Generation Tasks.

The paper ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation is on arXiv.


