March 26, 2024, 4:49 a.m. | Oliver Y. Feng, Yu-Chun Kao, Min Xu, Richard J. Samworth

stat.ML updates on arXiv.org

arXiv:2403.16688v1 Announce Type: cross
Abstract: In the context of linear regression, we construct a data-driven convex loss function with respect to which empirical risk minimisation yields optimal asymptotic variance in the downstream estimation of the regression coefficients. Our semiparametric approach targets the best decreasing approximation of the derivative of the log-density of the noise distribution. At the population level, this fitting process is a nonparametric extension of score matching, corresponding to a log-concave projection of the noise distribution with respect …
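As a rough illustration of the pipeline sketched in the abstract, the following minimal Python example fits a decreasing approximation of the noise score from pilot residuals, integrates it into a convex loss, and then performs empirical risk minimisation. This is not the authors' implementation: the pilot OLS fit, the Gaussian-kernel score estimate, the use of scikit-learn's IsotonicRegression for the decreasing projection, and the Nelder-Mead minimiser are all illustrative assumptions.

import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.optimize import minimize
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Toy linear regression with non-Gaussian (Laplace) noise.
n, d = 500, 3
X = rng.normal(size=(n, d))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.laplace(size=n)

# Step 1: pilot OLS fit; its residuals stand in for the unobserved noise.
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = np.sort(y - X @ beta_ols)

# Step 2: crude Gaussian-kernel estimate of the derivative of the
# log-density (the score) of the noise, evaluated at the residuals.
h = 1.06 * resid.std() * n ** (-0.2)              # rule-of-thumb bandwidth
u = (resid[:, None] - resid[None, :]) / h
k = np.exp(-0.5 * u ** 2)
f_hat = k.sum(axis=1) / (n * h * np.sqrt(2 * np.pi))
f_prime_hat = (-u * k).sum(axis=1) / (n * h ** 2 * np.sqrt(2 * np.pi))
score_hat = f_prime_hat / np.maximum(f_hat, 1e-12)

# Step 3: best decreasing approximation of the estimated score, obtained
# here by isotonic regression with increasing=False.
iso = IsotonicRegression(increasing=False, out_of_bounds="clip")
psi = iso.fit_transform(resid, score_hat)

# Step 4: integrate -psi to obtain a convex loss (its derivative -psi is
# non-decreasing); extend linearly outside the residual range.
loss_grid = cumulative_trapezoid(-psi, resid, initial=0.0)

def convex_loss(r):
    core = np.interp(r, resid, loss_grid)
    left = loss_grid[0] - psi[0] * (r - resid[0])
    right = loss_grid[-1] - psi[-1] * (r - resid[-1])
    return np.where(r < resid[0], left, np.where(r > resid[-1], right, core))

# Step 5: empirical risk minimisation with the fitted convex loss.
def empirical_risk(b):
    return convex_loss(y - X @ b).mean()

beta_hat = minimize(empirical_risk, beta_ols, method="Nelder-Mead").x
print("true:", beta_true)
print("OLS :", np.round(beta_ols, 3))
print("ERM :", np.round(beta_hat, 3))

With heavy-tailed noise such as the Laplace example above, the fitted convex loss behaves closer to an absolute-error loss than a squared-error loss, so the resulting estimate typically has smaller variance than OLS, which is the kind of downstream gain the abstract refers to.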

