Learning Algorithm Generalization Error Bounds via Auxiliary Distributions
April 18, 2024, 4:43 a.m. | Gholamali Aminian, Saeed Masiha, Laura Toni, Miguel R. D. Rodrigues
stat.ML updates on arXiv.org
Abstract: Generalization error bounds are essential for comprehending how well machine learning models work. In this work, we suggest a novel method, i.e., the Auxiliary Distribution Method, that leads to new upper bounds on expected generalization errors that are appropriate for supervised learning scenarios. We show that our general upper bounds can be specialized under some conditions to new bounds involving the $\alpha$-Jensen-Shannon, $\alpha$-R\'enyi ($0< \alpha < 1$) information between a random variable modeling the set …
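The bounds in the abstract are stated in terms of the $\alpha$-Rényi and $\alpha$-Jensen-Shannon information measures. As a rough illustration of the quantities involved (not the paper's method itself), the sketch below computes the $\alpha$-Rényi divergence and the skew ($\alpha$-)Jensen-Shannon divergence between two discrete distributions; the skew-JS definition with mixture $M = \alpha P + (1-\alpha)Q$ is one common convention and is assumed here.

```python
import numpy as np

def renyi_divergence(p, q, alpha):
    """Rényi divergence D_alpha(P || Q), 0 < alpha < 1, for discrete P, Q."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    # D_alpha = (1 / (alpha - 1)) * log( sum_i p_i^alpha * q_i^(1 - alpha) )
    return np.log(np.sum(p**alpha * q**(1.0 - alpha))) / (alpha - 1.0)

def kl_divergence(p, q):
    """KL divergence KL(P || Q), ignoring zero-probability terms in P."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return np.sum(p[mask] * np.log(p[mask] / q[mask]))

def alpha_jensen_shannon(p, q, alpha):
    """Skew (alpha-)Jensen-Shannon divergence with mixture M = alpha*P + (1-alpha)*Q
    (assumed convention): alpha*KL(P||M) + (1-alpha)*KL(Q||M)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = alpha * p + (1.0 - alpha) * q
    return alpha * kl_divergence(p, m) + (1.0 - alpha) * kl_divergence(q, m)

# Example: two Bernoulli-like distributions
p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
print("Renyi (alpha=0.5):", renyi_divergence(p, q, 0.5))
print("alpha-JS (alpha=0.5):", alpha_jensen_shannon(p, q, 0.5))
```

Both divergences are zero when the distributions coincide and strictly positive otherwise, which is what makes them usable as information measures in generalization bounds.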