Feb. 22, 2024, 5:43 a.m. | Jens Müller, Lars Kühmichel, Martin Rohbeck, Stefan T. Radev, Ullrich Köthe

cs.LG updates on arXiv.org

arXiv:2312.10107v2 Announce Type: replace
Abstract: In this work, we analyze the conditions under which information about the context of an input $X$ can improve the predictions of deep learning models in new domains. Following work in marginal transfer learning in Domain Generalization (DG), we formalize the notion of context as a permutation-invariant representation of a set of data points that originate from the same domain as the input itself. We offer a theoretical analysis of the conditions under which this …
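To make the abstract's notion of context concrete, here is a minimal, hypothetical sketch (not the authors' code) of a permutation-invariant set encoder in the Deep Sets style: a set of points from the input's domain is embedded element-wise, pooled with a symmetric operation (mean), and the result conditions the prediction for $X$. All names, layer sizes, and the mean-pooling choice are illustrative assumptions.

```python
# Hypothetical sketch of a permutation-invariant context encoder.
# Names (SetEncoder, phi, rho) and dimensions are illustrative, not
# from the paper.
import torch
import torch.nn as nn

class SetEncoder(nn.Module):
    """Maps a set {x_1, ..., x_n} from one domain to a single context
    vector. Mean pooling over the set axis makes the output invariant
    to the ordering of the set elements."""
    def __init__(self, in_dim: int, hidden_dim: int = 64, ctx_dim: int = 32):
        super().__init__()
        self.phi = nn.Sequential(           # per-element embedding
            nn.Linear(in_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )
        self.rho = nn.Sequential(           # post-pooling transform
            nn.Linear(hidden_dim, ctx_dim), nn.ReLU(),
        )

    def forward(self, xs: torch.Tensor) -> torch.Tensor:
        # xs: (batch, set_size, in_dim) -> context: (batch, ctx_dim)
        pooled = self.phi(xs).mean(dim=1)   # symmetric, order-invariant
        return self.rho(pooled)

# A downstream predictor can then condition on the context of X's domain:
encoder = SetEncoder(in_dim=10)
context_set = torch.randn(4, 16, 10)    # 4 domains, 16 context points each
x = torch.randn(4, 10)                  # one input per domain
ctx = encoder(context_set)              # (4, 32)
features = torch.cat([x, ctx], dim=-1)  # input to a prediction head
```

Because the pooling is symmetric, shuffling the context set leaves `ctx` unchanged, which is exactly the permutation invariance the abstract requires of the domain representation.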
