Bagging Provides Assumption-free Stability
April 26, 2024, 4:43 a.m. | Jake A. Soloff, Rina Foygel Barber, Rebecca Willett
cs.LG updates on arXiv.org
Abstract: Bagging is an important technique for stabilizing machine learning models. In this paper, we derive a finite-sample guarantee on the stability of bagging for any model. Our result places no assumptions on the distribution of the data, on the properties of the base algorithm, or on the dimensionality of the covariates. Our guarantee applies to many variants of bagging and is optimal up to a constant. Empirical results validate our findings, showing that bagging successfully …
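The abstract describes bagging: refitting a base algorithm on bootstrap resamples of the training data and averaging the resulting predictions, which damps sensitivity to any single training point. A minimal sketch of that idea follows; the helper names and the toy 1-nearest-neighbour base learner are illustrative assumptions, not taken from the paper.

```python
import random
import statistics

def bag_predict(train, x, base_fit, n_bags=50, seed=0):
    """Bagging: fit base_fit on bootstrap resamples of `train`
    and average the predictions at query point x. Averaging over
    resamples is the mechanism behind the stability the paper studies."""
    rng = random.Random(seed)
    n = len(train)
    preds = []
    for _ in range(n_bags):
        # Bootstrap resample: draw n points with replacement.
        resample = [train[rng.randrange(n)] for _ in range(n)]
        model = base_fit(resample)
        preds.append(model(x))
    return statistics.fmean(preds)

# Toy base algorithm (hypothetical choice): 1-nearest-neighbour
# regression, which is unstable on its own.
def fit_1nn(data):
    def predict(x):
        return min(data, key=lambda p: abs(p[0] - x))[1]
    return predict

train = [(0.0, 0.0), (1.0, 1.0), (2.0, 4.0), (3.0, 9.0)]
print(bag_predict(train, 1.5, fit_1nn))
```

The bagged prediction is an average over bootstrap fits, so it always lies within the range of the training responses, whereas a single 1-NN fit can jump discontinuously when one training point changes.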