Web: http://arxiv.org/abs/2205.01626

May 4, 2022, 1:11 a.m. | G.F. Bomarito, P.E. Leser, N.C.M. Strauss, K.M. Garbrecht, J.D. Hochhalter

cs.LG updates on arXiv.org

Interpretability and uncertainty quantification in machine learning can provide justification for decisions, promote scientific discovery, and lead to a better understanding of model behavior. Symbolic regression provides inherently interpretable machine learning, but relatively little work has focused on the use of symbolic regression on noisy data and the accompanying necessity to quantify uncertainty. A new Bayesian framework for genetic-programming-based symbolic regression (GPSR) is introduced that uses model evidence (i.e., marginal likelihood) to formulate replacement probability during the selection phase of …
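The abstract only names the core ingredient (model evidence driving a replacement probability during selection) and the full details are truncated, so the sketch below is an illustrative guess at what such a step could look like, not the paper's algorithm. It approximates the log evidence of each candidate expression with a BIC/Laplace-style term under a Gaussian noise assumption and converts the resulting Bayes factor into a parent-versus-child replacement probability. All function names, parameters, and the toy data are hypothetical.

```python
import numpy as np

def log_evidence_bic(residuals, n_params):
    """Approximate log model evidence with a BIC-style Laplace approximation
    under Gaussian noise (an assumption; the paper uses the actual marginal
    likelihood)."""
    n = residuals.size
    sigma2 = max(np.mean(residuals**2), 1e-12)   # MLE of the noise variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return log_lik - 0.5 * n_params * np.log(n)

def replacement_probability(parent_residuals, child_residuals, k_parent, k_child):
    """Probability of replacing the parent with the child, driven by the ratio
    of approximate model evidences, i.e. a Bayes factor B, mapped to B/(1+B)."""
    log_bf = (log_evidence_bic(child_residuals, k_child)
              - log_evidence_bic(parent_residuals, k_parent))
    return 1.0 / (1.0 + np.exp(-log_bf))

# Hypothetical usage inside one GPSR generation:
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)       # noisy data

parent_pred = 1.8 * x                             # stand-in for an evolved expression
child_pred = 2.05 * x                             # stand-in for its mutated offspring
p = replacement_probability(y - parent_pred, y - child_pred,
                            k_parent=1, k_child=1)
if rng.random() < p:
    print(f"child replaces parent (p = {p:.3f})")
```

Because the evidence penalizes parameter count, a replacement rule of this form would tend to favor simpler expressions when two candidates fit the noisy data comparably well, which is consistent with the interpretability motivation stated in the abstract.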

