March 19, 2024, 4:41 a.m. | Mintong Kang, Nezihe Merve G\"urel, Linyi Li, Bo Li

cs.LG updates on arXiv.org

arXiv:2403.11348v1 Announce Type: new
Abstract: Conformal prediction has shown promising performance in constructing statistically rigorous prediction sets for arbitrary black-box machine learning models, assuming the data are exchangeable. However, even small adversarial perturbations during inference can violate the exchangeability assumption, challenge the coverage guarantee, and result in a subsequent decline in empirical coverage. In this work, we propose a certifiably robust learning-reasoning conformal prediction framework (COLEP) via probabilistic circuits, which comprises a data-driven learning component that trains statistical models …
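For readers unfamiliar with the baseline that COLEP builds on, the following is a minimal sketch of vanilla split conformal prediction for classification (not COLEP itself, and not the paper's code). The softmax-based nonconformity score, the function name, and the signature are illustrative assumptions; only the finite-sample quantile construction and the exchangeability-based coverage guarantee come from standard conformal prediction.

```python
import numpy as np

def conformal_prediction_sets(cal_scores, cal_labels, test_scores, alpha=0.1):
    """Illustrative split conformal prediction for classification.

    cal_scores:  (n_cal, n_classes) softmax outputs on a held-out calibration set
    cal_labels:  (n_cal,) true labels for the calibration set
    test_scores: (n_test, n_classes) softmax outputs on test inputs
    alpha:       miscoverage level; target marginal coverage is 1 - alpha
    """
    n = len(cal_labels)
    # Nonconformity score: 1 minus the softmax score of the true class.
    nonconformity = 1.0 - cal_scores[np.arange(n), cal_labels]
    # Finite-sample-corrected (1 - alpha) quantile of the calibration scores.
    q_level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q_hat = np.quantile(nonconformity, q_level, method="higher")
    # A class enters the prediction set if its score clears the threshold.
    return [np.where(1.0 - s <= q_hat)[0] for s in test_scores]
```

Under exchangeability of calibration and test data, sets built this way cover the true label with probability at least 1 - alpha; the paper's point is that adversarial perturbations at inference time break exactly this assumption, which COLEP addresses with a certified learning-reasoning pipeline.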

