Jan. 16, 2024, 9:29 p.m. | /u/spring_m

Machine Learning www.reddit.com

I trained a relatively simple transformer-based diffusion model to generate 256x256 images from scratch. Here is the repo: [https://github.com/apapiu/transformer_latent_diffusion/tree/main](https://github.com/apapiu/transformer_latent_diffusion/tree/main) - the code should hopefully be fairly easy to understand and self-contained.
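
For readers who want the gist before opening the repo, here is a minimal sketch of what one training step of a latent diffusion model like this typically looks like: encode the image into a latent with a pretrained VAE, add noise at a random timestep, and train the transformer to predict that noise. The function names, the VAE interface, and the simple sqrt schedule below are illustrative assumptions, not the repo's actual code or schedule.

```python
import torch
import torch.nn.functional as F

def training_step(denoiser, vae, images, text_emb):
    # Hypothetical VAE interface: encode 256x256 RGB images into a smaller latent grid.
    with torch.no_grad():
        latents = vae.encode(images)

    # Sample a continuous timestep in [0, 1] per example and Gaussian noise.
    t = torch.rand(latents.shape[0], device=latents.device)
    noise = torch.randn_like(latents)

    # Illustrative noise schedule (sqrt interpolation); the repo uses its own choices here.
    a = (1 - t).view(-1, 1, 1, 1)
    noisy = a.sqrt() * latents + (1 - a).sqrt() * noise

    # The transformer denoiser predicts the added noise, conditioned on t and the text embedding.
    pred = denoiser(noisy, t, text_emb)
    return F.mse_loss(pred, noise)
```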

Here are some examples after about 30 hours of training on a single A100 from scratch:

[generated images based on various prompts](https://preview.redd.it/ncucpk1pdvcc1.png?width=1564&format=png&auto=webp&s=df65131fee3353ec0f96e9e89483b3978f6f2974)

The model is based on a DiT/Pixart-alpha architecture but with various modifications and simplifications. I also made some questionable decisions in terms of the noise schedule …
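
As a rough illustration of the DiT-style design mentioned above, a typical block conditions each transformer layer on the timestep (and prompt) embedding via adaptive layer norm (scale/shift/gate). The sketch below is a generic adaLN block, with illustrative names and shapes; it is not the repo's implementation or its specific simplifications.

```python
import torch
import torch.nn as nn

class DiTBlock(nn.Module):
    def __init__(self, dim: int, n_heads: int, mlp_ratio: float = 4.0):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim, elementwise_affine=False)
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim, elementwise_affine=False)
        self.mlp = nn.Sequential(
            nn.Linear(dim, int(dim * mlp_ratio)),
            nn.GELU(),
            nn.Linear(int(dim * mlp_ratio), dim),
        )
        # Conditioning MLP produces per-block scale/shift/gate parameters (adaLN).
        self.ada_ln = nn.Sequential(nn.SiLU(), nn.Linear(dim, 6 * dim))

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim) latent patch tokens; cond: (batch, dim) timestep/prompt embedding.
        s1, b1, g1, s2, b2, g2 = self.ada_ln(cond).chunk(6, dim=-1)
        h = self.norm1(x) * (1 + s1.unsqueeze(1)) + b1.unsqueeze(1)
        x = x + g1.unsqueeze(1) * self.attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x) * (1 + s2.unsqueeze(1)) + b2.unsqueeze(1)
        x = x + g2.unsqueeze(1) * self.mlp(h)
        return x
```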
