Improving Diversity with Adversarially Learned Transformations for Domain Generalization. (arXiv:2206.07736v1 [cs.LG])
Web: http://arxiv.org/abs/2206.07736
June 17, 2022, 1:10 a.m. | Tejas Gokhale, Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Chitta Baral, Yezhou Yang
cs.LG updates on arXiv.org
To be successful in single-source domain generalization, maximizing the diversity of synthesized domains has emerged as one of the most effective strategies. Many recent successes have come from methods that pre-specify the types of diversity that a model is exposed to during training, so that it can ultimately generalize well to new domains. However, naïve diversity-based augmentations do not work effectively for domain generalization, either because they cannot model large domain shift, or because the span of …
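The abstract points to adversarially learned transformations as a way to synthesize diverse training domains from a single source. As a rough illustration of that general idea only (not the paper's specific ALT algorithm), the sketch below alternates between a small transformation network trained to maximize a classifier's loss and a classifier trained on both clean and transformed images; all module names, architectures, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of adversarially learned augmentation for single-source
# domain generalization. This is a generic illustration, not the method
# described in the paper; every design choice below is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformNet(nn.Module):
    """Hypothetical image-to-image transformation network (the 'augmenter')."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        # Bounded residual perturbation keeps the transformed image near the input.
        return torch.clamp(x + 0.5 * self.body(x), 0.0, 1.0)

def adversarial_augmentation_step(classifier, augmenter, x, y,
                                  opt_cls, opt_aug, aug_weight=1.0):
    """One alternating update: the augmenter maximizes the classification loss
    (creating harder, more diverse synthetic domains), then the classifier
    minimizes its loss on both clean and transformed inputs."""
    # 1) Update the augmenter adversarially (gradient ascent on the loss).
    opt_aug.zero_grad()
    x_adv = augmenter(x)
    loss_aug = -F.cross_entropy(classifier(x_adv), y)  # negate to maximize
    loss_aug.backward()
    opt_aug.step()

    # 2) Update the classifier on clean + transformed data.
    opt_cls.zero_grad()
    with torch.no_grad():
        x_adv = augmenter(x)  # regenerate with the updated augmenter
    loss_cls = (F.cross_entropy(classifier(x), y)
                + aug_weight * F.cross_entropy(classifier(x_adv), y))
    loss_cls.backward()
    opt_cls.step()
    return loss_cls.item()
```

Constraining the augmenter to a bounded residual perturbation is one simple way to keep the synthesized domains label-preserving while still allowing sizable shifts; the constraints used in the actual paper may differ.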
Latest AI/ML/Big Data Jobs
Machine Learning Researcher - Saalfeld Lab
@ Howard Hughes Medical Institute - Chevy Chase, MD | Ashburn, Virginia
Project Director, Machine Learning in US Health
@ ideas42.org | Remote, US
Data Science Intern
@ NannyML | Remote
Machine Learning Engineer NLP/Speech
@ Play.ht | Remote
Research Scientist, 3D Reconstruction
@ Yembo | Remote, US
Clinical Assistant or Associate Professor of Management Science and Systems
@ University at Buffalo | Buffalo, NY