May 9, 2022, 1:11 a.m. | Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis

cs.LG updates on arXiv.org arxiv.org

In the classical setting of self-selection, the goal is to learn $k$ models simultaneously from observations $(x^{(i)}, y^{(i)})$, where $y^{(i)}$ is the output of one of the $k$ underlying models on input $x^{(i)}$. In contrast to mixture models, where we observe the output of a randomly selected model, here the observed model depends on the outputs themselves and is determined by a known selection criterion. For example, we might observe the highest output, the smallest output, or the median output of …
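As a minimal sketch (not from the paper), the self-selection process described above can be simulated as follows, using the "highest output" criterion: each observed label is the maximum of the $k$ noisy linear-model outputs, so which model is observed depends on the outputs themselves. All parameter choices ($k$, dimension, noise level) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

k, d, n = 3, 5, 1000
W = rng.normal(size=(k, d))               # k hidden linear models (assumed ground truth)

X = rng.normal(size=(n, d))               # inputs x^(i)
noise = 0.1 * rng.normal(size=(n, k))     # independent noise per model
outputs = X @ W.T + noise                 # noisy outputs of all k models on each input
y = outputs.max(axis=1)                   # observe only the highest output (self-selection)

# Unlike a mixture model, the index of the observed model is not random:
# it is determined by the realized outputs, so selection is correlated
# with the noise -- the source of self-selection bias.
print(X.shape, y.shape)
```

Running naive least squares on $(X, y)$ here would recover none of the $k$ models, since each $y^{(i)}$ mixes all models through the max criterion.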

