Nov. 14, 2022, 2:14 a.m. | Caner Hazirbas, Yejin Bang, Tiezheng Yu, Parisa Assar, Bilal Porgali, Vítor Albiero, Stefan Hermanek, Jacqueline Pan, Emily McReynolds, Miranda B

cs.CV updates on arXiv.org

Developing robust and fair AI systems requires datasets with a comprehensive
set of labels that can help ensure the validity and legitimacy of relevant
measurements. Recent efforts therefore focus on collecting person-related
datasets that have carefully selected labels, including sensitive
characteristics, and consent forms in place to use those attributes for model
testing and development. Responsible data collection involves several stages,
including but not limited to determining use-case scenarios and selecting
categories (annotations) such that the data are fit for the purpose …

algorithmic bias, arxiv, bias, consent, conversations, dataset, robustness
