Web: http://arxiv.org/abs/2205.03246

May 9, 2022, 1:11 a.m. | Yeshwanth Cherapanamjeri, Constantinos Daskalakis, Andrew Ilyas, Manolis Zampetakis

cs.LG updates on arXiv.org

In the classical setting of self-selection, the goal is to learn $k$ models
simultaneously from observations $(x^{(i)}, y^{(i)})$, where $y^{(i)}$ is the
output of one of $k$ underlying models on input $x^{(i)}$. In contrast to
mixture models, where we observe the output of a randomly selected model, here
the observed model depends on the outputs themselves and is determined by some
known selection criterion. For example, we might observe the highest output,
the smallest output, or the median output of …
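
As a concrete illustration of this data-generating process, here is a minimal simulation sketch of the "observe the highest output" criterion for $k$ noisy linear models. It is not code from the paper; the variable names and parameter choices (`k`, `d`, `n`, the noise scale) are illustrative assumptions.

```python
import numpy as np

# Simulate self-selected observations: y is the maximum of the k models'
# noisy outputs, and the identity of the selected model is NOT observed.
rng = np.random.default_rng(0)

k, d, n = 3, 5, 1000                 # number of models, input dimension, samples
W = rng.normal(size=(k, d))          # unknown weight vectors w_1, ..., w_k

X = rng.normal(size=(n, d))          # inputs x^{(i)}
noise = rng.normal(scale=0.1, size=(n, k))
outputs = X @ W.T + noise            # every model's output on every input

y = outputs.max(axis=1)              # self-selection: only the largest output
                                     # is recorded for each x^{(i)}
```

By contrast, in the mixture-model setting the recorded output would come from an index drawn at random, independently of the outputs themselves.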

Tags: arxiv, bias, good, linear, linear regression, math, regression
