Private Prediction Sets
March 5, 2024, 2:44 p.m. | Anastasios N. Angelopoulos, Stephen Bates, Tijana Zrnic, Michael I. Jordan
cs.LG updates on arXiv.org
Abstract: In real-world settings involving consequential decision-making, the deployment of machine learning systems generally requires both reliable uncertainty quantification and protection of individuals' privacy. We present a framework that treats these two desiderata jointly. Our framework is based on conformal prediction, a methodology that augments predictive models to return prediction sets that provide uncertainty quantification -- they provably cover the true response with a user-specified probability, such as 90%. One might hope that when used with …
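The abstract's coverage guarantee — prediction sets that contain the true response with a user-specified probability such as 90% — comes from the standard split conformal procedure. The sketch below illustrates that baseline (without the paper's privacy mechanism) on simulated classification scores; the data, class count, and score function are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): a classifier outputs softmax scores over K classes.
K = 10
n_cal = 500
alpha = 0.1  # target miscoverage: sets should cover the truth ~90% of the time

# Simulated calibration data: logits with the true class boosted so the
# model is better than random.
logits = rng.normal(size=(n_cal, K))
labels = rng.integers(0, K, size=n_cal)
logits[np.arange(n_cal), labels] += 2.0
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Conformal score: 1 minus the model's probability of the true class
# (higher score = worse fit on that example).
cal_scores = 1.0 - probs[np.arange(n_cal), labels]

# Threshold: the (ceil((n+1)(1-alpha))/n)-quantile of calibration scores.
# The finite-sample correction is what makes the coverage guarantee exact.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
qhat = np.quantile(cal_scores, q_level, method="higher")

# Prediction set for a new example: every class whose score clears qhat.
new_logits = rng.normal(size=K)
new_probs = np.exp(new_logits) / np.exp(new_logits).sum()
prediction_set = [k for k in range(K) if 1.0 - new_probs[k] <= qhat]
print(prediction_set)
```

The paper's contribution is to carry out this calibration step while also protecting the privacy of the calibration individuals; the non-private procedure above is only the starting point it builds on.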