Web: http://arxiv.org/abs/2106.07847

Jan. 26, 2022, 2:11 a.m. | Yujia Bao, Shiyu Chang, Regina Barzilay

cs.LG updates on arXiv.org

While unbiased machine learning models are essential for many applications,
bias is a human-defined concept that can vary across tasks. Given only
input-label pairs, algorithms may lack sufficient information to distinguish
stable (causal) features from unstable (spurious) features. However, related
tasks often share similar biases -- an observation we may leverage to develop
stable classifiers in the transfer setting. In this work, we explicitly inform
the target classifier about unstable features in the source tasks.
Specifically, we derive a representation …
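The abstract is truncated above, so the full algorithm is not given here; still, the core idea it states, contrasting source environments to expose unstable (spurious) features and then informing the target classifier about them, can be illustrated. Below is a minimal numpy/scikit-learn sketch of that general idea, not the paper's exact method: the toy data generator, the label-conditioned environment classifier, and the sample-weight balancing step are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_env(n, corr, flip_stable=0.1):
    """Toy binary task: feature 0 is stable (causal) in every environment,
    feature 1 is spurious and matches the label at rate `corr`."""
    y = rng.integers(0, 2, n)
    stable = np.where(rng.random(n) < flip_stable, 1 - y, y)
    spurious = np.where(rng.random(n) < corr, y, 1 - y)
    x = np.column_stack([stable, spurious]).astype(float)
    x += 0.1 * rng.standard_normal(x.shape)  # small observation noise
    return x, y

# Two source environments that differ only in the spurious correlation.
x1, y1 = make_env(2000, corr=0.9)
x2, y2 = make_env(2000, corr=0.6)

x_src = np.vstack([x1, x2])
env = np.concatenate([np.zeros(len(x1)), np.ones(len(x2))])
lab = np.concatenate([y1, y2])

# Contrast the environments: within a fixed label class, fit a classifier
# to predict which environment an example came from. Conditioning on the
# label cancels the stable signal, so only unstable features remain useful.
env_clf = LogisticRegression()
mask = lab == 1
env_clf.fit(x_src[mask], env[mask])
print("env-classifier weights:", env_clf.coef_)  # feature 1 dominates

# On the target task, partition examples by the unstable-feature score and
# reweight so each (group, label) cell carries equal total weight, i.e. the
# target classifier cannot profit from the unstable feature.
xt, yt = make_env(2000, corr=0.95)
groups = (env_clf.decision_function(xt) > 0).astype(int)
w = np.ones(len(xt))
for g in (0, 1):
    for c in (0, 1):
        m = (groups == g) & (yt == c)
        if m.any():
            w[m] = len(xt) / (4 * m.sum())
tgt_clf = LogisticRegression()
tgt_clf.fit(xt, yt, sample_weight=w)
print("target-classifier weights:", tgt_clf.coef_)  # leans on feature 0
```

The balancing step here is a simple stand-in for the group-robust objectives typically used once unstable features have been identified; the paper's own objective may differ.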
