On Output Activation Functions for Adversarial Losses: A Theoretical Analysis via Variational Divergence Minimization and An Empirical Study on MNIST Classification. (arXiv:1901.08753v3 [cs.LG] UPDATED)
Nov. 8, 2022, 2:12 a.m. | Hao-Wen Dong, Yi-Hsuan Yang
cs.LG updates on arXiv.org arxiv.org
Recent years have seen adversarial losses being applied in many fields. Their
applications extend beyond the originally proposed generative modeling to
conditional generative and discriminative settings. While prior work has
proposed various output activation functions and regularization approaches,
some open questions remain. In this paper, we aim to study the
following two research questions: 1) What types of output activation functions
form a well-behaved adversarial loss? 2) How do different combinations of output
activation functions and regularization approaches perform …
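As a minimal illustration of the design space the abstract describes (the pairing of an output activation function with an adversarial objective), the sketch below contrasts two common discriminator losses: the classic GAN loss, which uses a sigmoid output activation with binary cross-entropy, and the hinge loss, which uses a linear (identity) output activation. This is a generic example, not code from the paper, and the function names are hypothetical.

```python
import math

def sigmoid(x: float) -> float:
    """Sigmoid output activation, mapping a raw logit to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def classic_gan_d_loss(real_logit: float, fake_logit: float) -> float:
    # Classic GAN discriminator loss: sigmoid activation on the raw
    # logit, followed by binary cross-entropy against real/fake labels.
    return -(math.log(sigmoid(real_logit))
             + math.log(1.0 - sigmoid(fake_logit)))

def hinge_d_loss(real_logit: float, fake_logit: float) -> float:
    # Hinge discriminator loss: identity (linear) output activation,
    # penalizing real logits below +1 and fake logits above -1.
    return max(0.0, 1.0 - real_logit) + max(0.0, 1.0 + fake_logit)
```

For example, with uninformative logits of 0 for both real and fake samples, the classic GAN loss evaluates to 2·ln 2, while the hinge loss gives a penalty of 1 per sample; once the logits are well separated (real above +1, fake below −1) the hinge loss saturates at 0 while the cross-entropy loss only approaches it asymptotically.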