March 19, 2024, 4:42 a.m. | Alkis Kalavasis, Ilias Zadik, Manolis Zampetakis

cs.LG updates on arXiv.org

arXiv:2403.11963v1 Announce Type: new
Abstract: We study the fundamental problem of transfer learning where a learning algorithm collects data from some source distribution $P$ but needs to perform well with respect to a different target distribution $Q$. A standard change of measure argument implies that transfer learning happens when the density ratio $dQ/dP$ is bounded. Yet, prior thought-provoking works by Kpotufe and Martinet (COLT, 2018) and Hanneke and Kpotufe (NeurIPS, 2019) demonstrate cases where the ratio $dQ/dP$ is unbounded, but …
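A minimal sketch of the standard change of measure argument mentioned in the abstract, assuming the target distribution $Q$ is absolutely continuous with respect to the source $P$ and writing the target risk of a hypothesis $h$ under a nonnegative loss $\ell$ (notation introduced here for illustration, not taken from the abstract):

$$
\mathbb{E}_{x \sim Q}\big[\ell(h, x)\big] \;=\; \mathbb{E}_{x \sim P}\!\left[\frac{dQ}{dP}(x)\,\ell(h, x)\right] \;\le\; \left\|\frac{dQ}{dP}\right\|_{\infty} \,\mathbb{E}_{x \sim P}\big[\ell(h, x)\big].
$$

When the density ratio $dQ/dP$ is bounded, any guarantee on the source risk transfers to the target risk up to that bound, which is the sense in which transfer learning "happens" under the bounded-ratio condition; the cited works of Kpotufe and Martinet (COLT, 2018) and Hanneke and Kpotufe (NeurIPS, 2019) study regimes where this ratio is unbounded and the argument above no longer applies.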
