[R] Baidu’s 10-Billion Scale ERNIE-ViLG Unified Generative Pretraining Framework Achieves SOTA Performance on Bidirectional Vision-Language Generation Tasks
Artificial Intelligence www.reddit.com
Baidu researchers propose ERNIE-ViLG, a unified generative pretraining framework with 10 billion parameters for bidirectional text-image generation. Pretrained on 145 million (Chinese) image-text pairs, ERNIE-ViLG achieves state-of-the-art performance on both text-to-image and image-to-text generation tasks.
Here is a quick read: Baidu’s 10-Billion Scale ERNIE-ViLG Unified Generative Pretraining Framework Achieves SOTA Performance on Bidirectional Vision-Language Generation Tasks.
The paper ERNIE-ViLG: Unified Generative Pre-training for Bidirectional Vision-Language Generation is on arXiv.
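To make "bidirectional generation with one model" concrete, here is a minimal, illustrative sketch. It assumes the common setup for this kind of framework: images are quantized into discrete codes (e.g., by a VQ-VAE), so both text-to-image and image-to-text reduce to next-token prediction over a joint vocabulary handled by a single shared transformer. The vocabulary sizes, dimensions, and `[T2I]`/`[I2T]` task tokens below are placeholders for illustration, not Baidu's actual configuration.

```python
# Minimal sketch of unified bidirectional image-text generation (illustrative only).
import torch
import torch.nn as nn

TEXT_VOCAB, IMAGE_VOCAB = 30_000, 8_192   # hypothetical vocabulary sizes
D_MODEL, N_HEADS, N_LAYERS = 256, 4, 2    # tiny demo dimensions

class UnifiedBidirectionalGenerator(nn.Module):
    """One autoregressive transformer shared by text->image and image->text."""
    def __init__(self):
        super().__init__()
        # Text tokens, image tokens, and two task markers share one embedding table.
        self.vocab_size = TEXT_VOCAB + IMAGE_VOCAB + 2
        self.t2i_token = self.vocab_size - 2   # marks "generate image from text"
        self.i2t_token = self.vocab_size - 1   # marks "generate text from image"
        self.embed = nn.Embedding(self.vocab_size, D_MODEL)
        self.pos = nn.Embedding(1024, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, N_HEADS, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, N_LAYERS)
        self.head = nn.Linear(D_MODEL, self.vocab_size)

    def forward(self, tokens):
        # Causal mask: each position only attends to earlier positions.
        seq_len = tokens.size(1)
        mask = nn.Transformer.generate_square_subsequent_mask(seq_len)
        pos_ids = torch.arange(seq_len, device=tokens.device)
        h = self.embed(tokens) + self.pos(pos_ids)
        h = self.backbone(h, mask=mask)
        return self.head(h)  # next-token logits over the joint vocabulary

model = UnifiedBidirectionalGenerator()

# Text-to-image: condition on text tokens, then predict image codes.
text = torch.randint(0, TEXT_VOCAB, (1, 12))
t2i_input = torch.cat([text, torch.tensor([[model.t2i_token]])], dim=1)
print(model(t2i_input).shape)  # (1, 13, vocab) -- logits for the next image token

# Image-to-text: condition on image codes, then predict caption tokens.
image_codes = torch.randint(TEXT_VOCAB, TEXT_VOCAB + IMAGE_VOCAB, (1, 16))
i2t_input = torch.cat([image_codes, torch.tensor([[model.i2t_token]])], dim=1)
print(model(i2t_input).shape)  # (1, 17, vocab) -- logits for the next text token
```

The point of the shared backbone and joint vocabulary is that a single set of pretrained parameters serves both generation directions; the task token simply tells the model which modality to produce next.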
submitted by /u/Yuqing7