Sept. 15, 2022, 1:13 a.m. | Bum Chul Kwon, Jungsoo Lee, Chaeyeon Chung, Nyoungwoo Lee, Ho-Jin Choi, Jaegul Choo

cs.CV updates on arXiv.org

Image classification models often learn to predict a class from
irrelevant co-occurrences between input features and the output class in
the training data. We call these unwanted correlations "data biases," and the
visual features that cause them "bias factors." Identifying and mitigating
such biases automatically, without human intervention, is challenging, so we
conducted a design study to find a human-in-the-loop solution. First, working
with three experts, we identified user tasks that capture the bias mitigation
process for image classification models. …
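
To make the notion of a bias factor concrete, here is a minimal sketch (not from the paper) on a synthetic two-feature dataset: one genuinely predictive but noisy signal, and one "bias factor" that co-occurs with the label during training but not at test time. The names `make_data` and `bias_strength` are illustrative assumptions, and scikit-learn's `LogisticRegression` stands in for an image classifier.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, bias_strength):
    """Two features: a genuine (noisy) signal and a 'bias factor' that
    agrees with the label with probability `bias_strength`."""
    y = rng.integers(0, 2, n)
    signal = y + rng.normal(0.0, 1.0, n)          # weakly predictive feature
    agree = rng.random(n) < bias_strength          # how often bias tracks label
    bias = np.where(agree, y, 1 - y) + rng.normal(0.0, 0.1, n)
    return np.column_stack([signal, bias]), y

# Training data: the bias factor co-occurs with the label 95% of the time.
X_train, y_train = make_data(5000, bias_strength=0.95)
# Test data: the co-occurrence is broken (0.5 = chance level).
X_test, y_test = make_data(5000, bias_strength=0.5)

clf = LogisticRegression().fit(X_train, y_train)
print("train accuracy:", clf.score(X_train, y_train))
print("test accuracy :", clf.score(X_test, y_test))   # drops sharply
print("coefficients (signal, bias):", clf.coef_)
```

Test accuracy falls well below training accuracy because the model puts most of its weight on the bias feature, which is exactly the failure mode the abstract describes: the learned predictor exploits a co-occurrence rather than the relevant visual signal.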

analytics, arxiv, augmentation, classification, dash, data, image, synthetic data, visual analytics
