April 18, 2022, 1:11 a.m. | Ronitt Rubinfeld, Arsen Vasilyan

cs.LG updates on arXiv.org

There are many important high-dimensional function classes that have fast
agnostic learning algorithms when strong assumptions can be made on the
distribution of examples, such as Gaussianity or uniformity over the domain.
But how can one be sufficiently confident that the data indeed satisfies the
distributional assumption, so that one can trust the output quality of the
agnostic learning algorithm? We propose a model by which to systematically
study the design of tester-learner pairs $(\mathcal{A},\mathcal{T})$, such that
if …

