March 19, 2024, 4:42 a.m. | Christian Schlauch, Christian Wirth, Nadja Klein

cs.LG updates on arXiv.org

arXiv:2403.11966v1 Announce Type: new
Abstract: Prior parameter distributions provide an elegant way to represent expert and world knowledge for informed learning. Previous work has shown that using such informative priors to regularize probabilistic deep learning (DL) models increases their performance and data efficiency. However, commonly used sampling-based approximations for probabilistic DL models can be computationally expensive, requiring multiple inference passes and longer training times. Promising alternatives are compute-efficient last-layer kernel approximations such as spectral-normalized Gaussian processes (SNGPs). We propose …
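For background on the SNGP construction the abstract builds on (Liu et al., 2020): spectral normalization keeps the feature extractor approximately distance-preserving, and the last layer is a Gaussian process approximated with random Fourier features, so a single forward pass yields distance-aware predictions instead of multiple sampling passes. Below is a minimal sketch in PyTorch; all names are illustrative, the Laplace covariance update that SNGP uses for predictive variance is omitted for brevity, and this is not the authors' code.

```python
import math
import torch
import torch.nn as nn

class RandomFeatureGPHead(nn.Module):
    """Last-layer GP approximated with random Fourier features (SNGP-style sketch).

    The random projection W and phases b are fixed buffers; only the output
    weights beta are trained, so inference needs one forward pass.
    """

    def __init__(self, in_features: int, num_classes: int, num_features: int = 1024):
        super().__init__()
        # Fixed (non-trainable) random projection and phase shifts.
        self.register_buffer("W", torch.randn(num_features, in_features))
        self.register_buffer("b", 2 * math.pi * torch.rand(num_features))
        self.beta = nn.Linear(num_features, num_classes, bias=False)
        self.scale = math.sqrt(2.0 / num_features)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # phi(h) = sqrt(2/D) * cos(W h + b): random features for an RBF kernel.
        phi = self.scale * torch.cos(h @ self.W.t() + self.b)
        return self.beta(phi)

# Spectral normalization on the hidden layers keeps the backbone
# bi-Lipschitz, so input distances are preserved in feature space.
backbone = nn.Sequential(
    nn.utils.parametrizations.spectral_norm(nn.Linear(32, 128)),
    nn.ReLU(),
    nn.utils.parametrizations.spectral_norm(nn.Linear(128, 128)),
    nn.ReLU(),
)
model = nn.Sequential(backbone, RandomFeatureGPHead(128, num_classes=10))

logits = model(torch.randn(4, 32))  # one inference pass, no sampling
```

This illustrates why the abstract calls SNGPs compute-efficient: unlike sampling-based approximations, no repeated stochastic forward passes are needed at inference time.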
