Optimal convex $M$-estimation via score matching
March 26, 2024, 4:49 a.m. | Oliver Y. Feng, Yu-Chun Kao, Min Xu, Richard J. Samworth
stat.ML updates on arXiv.org
Abstract: In the context of linear regression, we construct a data-driven convex loss function with respect to which empirical risk minimisation yields optimal asymptotic variance in the downstream estimation of the regression coefficients. Our semiparametric approach targets the best decreasing approximation of the derivative of the log-density of the noise distribution. At the population level, this fitting process is a nonparametric extension of score matching, corresponding to a log-concave projection of the noise distribution with respect …
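The abstract's key idea is that a decreasing score function (the derivative of a log-density) has a convex negative antiderivative, so fitting the best decreasing approximation of the estimated noise score yields a data-driven convex loss. A minimal sketch of that pipeline, assuming synthetic Laplace draws as stand-in regression residuals, a Gaussian-KDE score estimate, and a pool-adjacent-violators projection onto decreasing functions (all hypothetical illustration choices, not the authors' exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_score(residuals, grid, h):
    """Estimate the score psi(x) = f'(x)/f(x) from a Gaussian KDE of f."""
    diffs = grid[:, None] - residuals[None, :]          # (grid, n)
    k = np.exp(-0.5 * (diffs / h) ** 2)                 # unnormalised kernel
    f = k.sum(axis=1)                                   # proportional to density
    fprime = (-(diffs / h ** 2) * k).sum(axis=1)        # its derivative
    return fprime / f

def pav_decreasing(y):
    """L2-project y onto non-increasing sequences (PAV applied to -y)."""
    blocks = []                                          # each block: [mean, weight]
    for v in -np.asarray(y, dtype=float):
        blocks.append([v, 1])
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, m1 = blocks.pop(), blocks.pop()
            w = m1[1] + m2[1]
            blocks.append([(m1[0] * m1[1] + m2[0] * m2[1]) / w, w])
    return -np.concatenate([np.full(w, m) for m, w in blocks])

# Stand-in residuals: Laplace noise, whose true score -sign(x) is decreasing.
res = rng.laplace(size=2000)
grid = np.linspace(-4, 4, 201)
psi_hat = kde_score(res, grid, h=0.4)
psi_dec = pav_decreasing(psi_hat)       # best decreasing approximation on the grid

# Data-driven convex loss: negative antiderivative of the decreasing score
# (trapezoidal cumulative integration along the grid).
loss = -np.concatenate(
    [[0.0], np.cumsum(0.5 * (psi_dec[1:] + psi_dec[:-1]) * np.diff(grid))]
)
```

Because `psi_dec` is non-increasing, `loss` has a non-decreasing derivative and is therefore convex, so empirical risk minimisation with it is a convex programme; the paper's contribution is showing this choice is asymptotically variance-optimal among convex losses.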