Feb. 27, 2024, 5:44 a.m. | Jiaxin Shi, Michalis K. Titsias, Andriy Mnih

cs.LG updates on arXiv.org

arXiv:1910.10596v5 Announce Type: replace-cross
Abstract: We introduce a new interpretation of sparse variational approximations for Gaussian processes using inducing points, which can lead to more scalable algorithms than previous methods. It is based on decomposing a Gaussian process as a sum of two independent processes: one spanned by a finite basis of inducing points and the other capturing the remaining variation. We show that this formulation recovers existing approximations and at the same time allows us to obtain tighter lower bounds …
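The decomposition the abstract describes can be sketched numerically. In the following minimal NumPy example (an illustration, not the paper's implementation), an RBF kernel and a small set of inducing inputs Z are assumed: the full covariance over test inputs splits into a Nyström part spanned by the inducing-point basis plus an independent residual covariance, and the two parts sum back to the exact prior covariance.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential (RBF) kernel between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

X = np.linspace(0.0, 5.0, 50)   # test inputs
Z = np.linspace(0.0, 5.0, 8)    # inducing inputs (illustrative choice)

Kxx = rbf(X, X)
Kxz = rbf(X, Z)
Kzz = rbf(Z, Z) + 1e-8 * np.eye(len(Z))  # jitter for numerical stability

# Covariance of the process spanned by the finite inducing-point basis
# (the Nystrom approximation K_xz K_zz^{-1} K_zx).
Q = Kxz @ np.linalg.solve(Kzz, Kxz.T)

# Covariance of the independent residual process capturing the
# remaining variation (a Schur complement, hence positive semidefinite).
R = Kxx - Q

# The two independent parts recover the full GP prior covariance.
print(np.allclose(Q + R, Kxx))
```

The residual covariance R is what the sparse approximation discards; tighter variational bounds come from treating it more carefully rather than simply dropping it.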

