Mitigating multiple descents: A model-agnostic framework for risk monotonization. (arXiv:2205.12937v1 [math.ST])
May 26, 2022, 1:11 a.m. | Pratik Patil, Arun Kumar Kuchibhotla, Yuting Wei, Alessandro Rinaldo
stat.ML updates on arXiv.org
Recent empirical and theoretical analyses of several commonly used prediction procedures reveal a peculiar risk behavior in high dimensions, referred to as double/multiple descent, in which the asymptotic risk is a non-monotonic function of the limiting aspect ratio of the number of features or parameters to the sample size. To mitigate this undesirable behavior, we develop a general framework for risk monotonization based on cross-validation that takes as input a generic prediction procedure and returns a modified procedure whose out-of-sample …
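The idea of a cross-validation wrapper that monotonizes risk can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual algorithm: the function name `monotonize_risk`, the choice of subsample fractions, and the single hold-out split are all hypothetical. The sketch exploits the fact that training on a subsample changes the effective aspect ratio (features to sample size), so selecting the subsample size with the smallest validation risk can step around a risk peak:

```python
import numpy as np

def monotonize_risk(fit, X, y, subsample_fracs=(0.25, 0.5, 0.75, 1.0),
                    val_frac=0.25, seed=0):
    """Hypothetical sketch: wrap a generic prediction procedure `fit`
    ((X_train, y_train) -> predictor) and return the predictor trained
    on the subsample size with the smallest hold-out squared-error risk."""
    rng = np.random.default_rng(seed)
    n = len(y)
    idx = rng.permutation(n)
    n_val = int(n * val_frac)
    val, train = idx[:n_val], idx[n_val:]
    best_risk, best_predictor = np.inf, None
    for frac in subsample_fracs:
        k = max(1, int(len(train) * frac))
        predictor = fit(X[train[:k]], y[train[:k]])
        risk = np.mean((predictor(X[val]) - y[val]) ** 2)
        if risk < best_risk:
            best_risk, best_predictor = risk, predictor
    return best_predictor

# Toy usage with min-norm least squares (a procedure known to exhibit
# double descent as the aspect ratio crosses 1).
def least_squares(X_tr, y_tr):
    beta = np.linalg.pinv(X_tr) @ y_tr
    return lambda X_new: X_new @ beta

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 50))
beta = rng.normal(size=50)
y = X @ beta + rng.normal(size=200)
model = monotonize_risk(least_squares, X, y)
print(model(X[:5]).shape)  # (5,)
```

The wrapper is model-agnostic in the same spirit as the abstract: `fit` can be any procedure returning a predictor, and only validation risk drives the choice among subsample sizes.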