April 18, 2024, 2:02 a.m. | /u/darthjaja6

Machine Learning www.reddit.com

[Figure: training loss curves] https://preview.redd.it/h13z13eua5vc1.png?width=640&format=png&auto=webp&s=397d5127453b2f4a1d6f6df28fb5fc8a2f2f0cff

I think the VQ loss and perceptual loss look normal, but I find it hard to understand why the discriminator loss moves in a completely different direction... has anyone seen something similar before?

More details: I'm training the VQGAN on ImageNet, following the paper Taming Transformers for High-Resolution Image Synthesis.
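For context on what that discriminator curve is measuring: the taming-transformers codebase defaults to a hinge formulation for the adversarial terms. Below is a minimal sketch of those two losses, assuming the standard hinge setup (function names here are illustrative):

```python
import torch
import torch.nn.functional as F

def hinge_d_loss(logits_real: torch.Tensor, logits_fake: torch.Tensor) -> torch.Tensor:
    # Discriminator objective: push logits on real images above +1
    # and logits on reconstructions below -1.
    loss_real = torch.mean(F.relu(1.0 - logits_real))
    loss_fake = torch.mean(F.relu(1.0 + logits_fake))
    return 0.5 * (loss_real + loss_fake)

def g_adversarial_loss(logits_fake: torch.Tensor) -> torch.Tensor:
    # Generator objective: raise the discriminator's score on reconstructions,
    # i.e. the exact opposite pressure from hinge_d_loss above.
    return -torch.mean(logits_fake)
```

Because the two networks optimize opposing objectives by construction, the discriminator loss is not expected to track the VQ or perceptual terms; it trending in a different direction is not, on its own, a sign of a broken run.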

