Domain Generalization by Mutual-Information Regularization with Pre-trained Models. (arXiv:2203.10789v2 [cs.LG] UPDATED)
July 25, 2022, 1:13 a.m. | Junbum Cha, Kyungjae Lee, Sungrae Park, Sanghyuk Chun
cs.CV updates on arXiv.org arxiv.org
Domain generalization (DG) aims to learn a model that generalizes to an unseen
target domain using only a limited set of source domains. Previous attempts at DG
fail to learn domain-invariant representations from the source domains alone
because of the significant domain shift between training and test domains.
Instead, we re-formulate the DG objective using mutual information with the
oracle model, a model that generalizes to any possible domain. We derive a
tractable variational lower bound by approximating the oracle model with a
pre-trained model, …
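The regularization the abstract describes can be sketched in a few lines. This is an illustrative reading, not the paper's released code: it assumes a Gaussian variational family with a learnable per-dimension log-variance, so that maximizing the mutual-information lower bound between the learned features and the frozen pre-trained ("oracle surrogate") features reduces to minimizing a Gaussian negative log-likelihood penalty. The function and argument names (`mi_regularizer`, `z_student`, `z_oracle`, `log_var`) are hypothetical.

```python
import numpy as np

def mi_regularizer(z_student, z_oracle, log_var):
    """Variational lower-bound penalty on MI with frozen pre-trained features.

    z_student: features from the model being trained (array, shape [d]).
    z_oracle:  features from the frozen pre-trained model (same shape).
    log_var:   learnable per-dimension Gaussian log-variance (same shape).

    Returns the mean per-dimension Gaussian NLL (up to constants);
    maximizing the MI lower bound corresponds to minimizing this value.
    """
    var = np.exp(log_var)
    return np.mean(log_var + (z_oracle - z_student) ** 2 / var)

# Illustrative use: add the penalty to the task loss with a trade-off weight.
z_s = np.array([0.5, -1.0, 2.0])
z_o = np.array([0.4, -0.9, 2.1])
penalty = mi_regularizer(z_s, z_o, log_var=np.zeros(3))
```

When the student features match the oracle features exactly and the variance is fixed at one (`log_var = 0`), the penalty is zero; any deviation is weighted by the learned inverse variance, which lets the regularizer focus on feature dimensions the pre-trained model encodes reliably.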