Web: http://arxiv.org/abs/2206.08213

June 17, 2022, 1:11 a.m. | Harsh Rangwani, Sumukh K Aithal, Mayank Mishra, Arihant Jain, R. Venkatesh Babu

cs.LG updates on arXiv.org

Domain adversarial training is ubiquitous for achieving invariant
representations and is used widely for various domain adaptation tasks. In
recent times, methods converging to smooth optima have shown improved
generalization for supervised learning tasks like classification. In this work,
we analyze the effect of smoothness-enhancing formulations on domain
adversarial training, whose objective is a combination of a task loss (e.g.,
classification, regression) and adversarial terms. We find that
converging to a smooth minimum with respect to …
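The abstract describes an objective that combines a task loss with an adversarial term, and studies smoothness-seeking optimization of it. The sketch below is an illustrative toy (not the paper's code): a SAM-style sharpness-aware update applied to the task-loss part of such a combined objective, using simple quadratic stand-ins for both loss terms. The functions, learning rate, and perturbation radius `rho` are all assumptions for demonstration.

```python
import numpy as np

# Toy quadratic standing in for a task (classification) loss, minimized at w = 1.
def task_loss(w):
    return 0.5 * np.sum((w - 1.0) ** 2)

def task_grad(w):
    return w - 1.0

# Toy gradient standing in for the adversarial (domain-discrepancy) term.
def adv_grad(w):
    return 0.1 * w

def adv_loss(w):
    return 0.05 * np.sum(w ** 2)

def sam_step(w, lr=0.1, rho=0.05):
    """One sharpness-aware (SAM-style) step on the task loss,
    combined with the plain gradient of the adversarial term."""
    g = task_grad(w)
    norm = np.linalg.norm(g) + 1e-12
    w_perturbed = w + rho * g / norm      # ascend to a nearby high-loss point
    g_smooth = task_grad(w_perturbed)     # gradient at the perturbed weights
    return w - lr * (g_smooth + adv_grad(w))

w = np.array([3.0, -2.0])
initial = task_loss(w) + adv_loss(w)
for _ in range(100):
    w = sam_step(w)
final = task_loss(w) + adv_loss(w)
```

Applying the sharpness-aware perturbation only to the task-loss gradient (while leaving the adversarial gradient untouched) is one natural decomposition of the combined objective; the abstract's truncation leaves the paper's exact choice unspecified here.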

