Web: http://arxiv.org/abs/2201.11989

Jan. 31, 2022, 2:11 a.m. | Naoki Sato, Hideaki Iiduka

cs.LG updates on arXiv.org

Previous numerical results have shown that a two time-scale update rule
(TTUR) using constant learning rates is practically useful for training
generative adversarial networks (GANs). Meanwhile, theoretical analyses of
TTUR for finding a stationary local Nash equilibrium of the two-player Nash
equilibrium problem between a discriminator and a generator have assumed
decaying learning rates. In this paper, we give a theoretical analysis of TTUR
using constant learning rates to bridge this gap between theory and practice. …
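In practice, a TTUR with constant learning rates simply gives each player its own fixed step size, typically a larger one for the discriminator than for the generator. The following is a minimal sketch of such a training step, assuming a PyTorch-style GAN loop; the architectures, loss, and the specific learning-rate values (4e-4 and 1e-4) are illustrative assumptions, not the setup analyzed in the paper.

```python
import torch
import torch.nn as nn

latent_dim = 64

# Placeholder generator and discriminator; any architectures would do here.
generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, 784))
discriminator = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 1))

# TTUR with constant learning rates: the two players use different,
# fixed step sizes (discriminator larger than generator here).
opt_d = torch.optim.Adam(discriminator.parameters(), lr=4e-4)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4)

bce = nn.BCEWithLogitsLoss()

def training_step(real_batch):
    batch_size = real_batch.size(0)

    # Discriminator update with its own constant step size.
    fake = generator(torch.randn(batch_size, latent_dim)).detach()
    d_loss = bce(discriminator(real_batch), torch.ones(batch_size, 1)) + \
             bce(discriminator(fake), torch.zeros(batch_size, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update with a different constant step size.
    fake = generator(torch.randn(batch_size, latent_dim))
    g_loss = bce(discriminator(fake), torch.ones(batch_size, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

The only TTUR-specific choice above is the pair of unequal, constant learning rates passed to the two optimizers; everything else is a standard GAN training step.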

