May 27, 2022, 1:11 a.m. | Federica Gerace, Florent Krzakala, Bruno Loureiro, Ludovic Stephan, Lenka Zdeborová

stat.ML updates on arXiv.org

While classical in many theoretical settings, the assumption of Gaussian
i.i.d. inputs is often perceived as a strong limitation in the analysis of
high-dimensional learning. In this study, we redeem this line of work in the
case of generalized linear classification with random labels. Our main
contribution is a rigorous proof that data coming from a range of generative
models in high dimensions have the same minimum training loss as Gaussian data
with the corresponding data covariance. In particular, our theorem covers …
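The universality claim can be illustrated numerically: minimize a regularized logistic loss with random labels on structured inputs, then on Gaussian inputs whose covariance is matched to the structured data, and compare the two minima. The sketch below is only illustrative and uses an assumed generative model (tanh of a random linear map), not the paper's exact setting; the optimizer is plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 100

# Structured inputs: a simple random-features generative model
# (illustrative assumption, not the models covered by the theorem).
F = rng.standard_normal((d, d)) / np.sqrt(d)
X_struct = np.tanh(rng.standard_normal((n, d)) @ F)

# Gaussian inputs with the *matched* covariance of the structured data.
cov = np.cov(X_struct, rowvar=False)
L = np.linalg.cholesky(cov + 1e-6 * np.eye(d))
X_gauss = rng.standard_normal((n, d)) @ L.T

# Random labels, independent of the inputs.
y = rng.choice([-1.0, 1.0], size=n)

def min_logistic_loss(X, y, lam=1e-2, steps=2000, lr=0.5):
    """Minimize the ridge-regularized logistic loss by gradient descent
    and return the (approximate) minimum training loss."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        m = y * (X @ w)
        # gradient: mean of -y * x * sigmoid(-y x.w), plus ridge term
        g = -(X * (y / (1.0 + np.exp(m)))[:, None]).mean(axis=0) + lam * w
        w -= lr * g
    return np.mean(np.log1p(np.exp(-y * (X @ w)))) + 0.5 * lam * w @ w

loss_struct = min_logistic_loss(X_struct, y)
loss_gauss = min_logistic_loss(X_gauss, y)
# Universality predicts these two minima are close in high dimensions.
print(loss_struct, loss_gauss)
```

Both minima lie below the trivial baseline `log 2` attained at `w = 0`; under the universality result, the gap between them shrinks as `n, d` grow proportionally.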

