April 18, 2024, 11:14 p.m. | Jimmy Guerrero

DEV Community dev.to

Text-to-image (T2I) diffusion models such as Stable Diffusion XL and DALL-E 3 achieve state-of-the-art (SOTA) performance on various compositional T2I benchmarks, but at the cost of significant computational resources. For instance, the unCLIP stack (i.e., DALL-E 2) comprises a T2I prior and a diffusion image decoder. The T2I prior model alone adds a billion parameters, increasing both the computational cost and the high-quality data requirements. Maitreya Patel et al. propose ECLIPSE, a novel contrastive learning method that is both parameter- and data-efficient, as a way to combat these …
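To make the idea concrete, here is a minimal sketch of the kind of symmetric contrastive (InfoNCE-style) objective a compact prior could be trained with, aligning its predicted image embeddings against ground-truth vision-encoder embeddings. The function and parameter names are illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def contrastive_prior_loss(pred_img_emb, target_img_emb, temperature=0.07):
    """Symmetric InfoNCE-style contrastive loss (sketch, not the official
    ECLIPSE implementation). Both inputs have shape (batch, dim)."""
    # L2-normalize both sets of embeddings so logits are scaled cosine similarities
    p = pred_img_emb / np.linalg.norm(pred_img_emb, axis=1, keepdims=True)
    t = target_img_emb / np.linalg.norm(target_img_emb, axis=1, keepdims=True)
    logits = p @ t.T / temperature       # (batch, batch) similarity matrix
    diag = np.arange(len(p))             # matching pairs lie on the diagonal

    def xent(l):
        # cross-entropy with the diagonal as the positive class
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[diag, diag].mean()

    # average the prediction->target and target->prediction directions
    return 0.5 * (xent(logits) + xent(logits.T))
```

A trained prior that maps text embeddings close to the correct image embeddings (diagonal pairs) and away from the other images in the batch drives this loss toward zero, which is what lets a much smaller model stand in for a billion-parameter diffusion prior.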

