Understanding Generalization via Leave-One-Out Conditional Mutual Information. (arXiv:2206.14800v1 [cs.LG])
June 30, 2022, 1:10 a.m. | Mahdi Haghifam, Shay Moran, Daniel M. Roy, Gintare Karolina Dziugaite
cs.LG updates on arXiv.org
We study the mutual information between (certain summaries of) the output of
a learning algorithm and its $n$ training data points, conditional on a
supersample of $n+1$ i.i.d. data points from which the training set is chosen
at random without replacement. These leave-one-out variants of the conditional
mutual information (CMI) of an algorithm (Steinke and Zakynthinou, 2020) are
also seen to control the mean generalization error of learning algorithms with
bounded loss functions. For learning algorithms achieving zero empirical risk
under 0-1 …
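The leave-one-out construction in the abstract can be sketched concretely: draw a supersample of $n+1$ i.i.d. points, hold out one index uniformly at random, and train on the remaining $n$. The sketch below is an illustrative reading of that setup, not code from the paper; the function and variable names are hypothetical.

```python
import random

def leave_one_out_split(supersample, rng=random):
    """Split a supersample of n+1 i.i.d. points into a training set
    of size n and one held-out point.

    Choosing n points without replacement from n+1 is equivalent to
    holding out one index U uniformly at random; U is the latent
    variable whose conditional mutual information with the learning
    algorithm's output the leave-one-out CMI measures.
    """
    u = rng.randrange(len(supersample))            # held-out index U
    train = [z for i, z in enumerate(supersample) if i != u]
    return train, supersample[u], u

# Illustrative use: a toy supersample of n+1 = 6 (input, label) pairs.
supersample = [(x, 2 * x) for x in range(6)]
train, held_out, u = leave_one_out_split(supersample)
assert len(train) == len(supersample) - 1
assert held_out == supersample[u]
```

Intuitively, the less the algorithm's output reveals about which index was held out, the smaller this CMI, and (per the result summarized above) the tighter the bound on mean generalization error.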