Casual Conversations v2: Designing a large consent-driven dataset to measure algorithmic bias and robustness. (arXiv:2211.05809v1 [cs.CV])
cs.CL updates on arXiv.org
Developing robust and fair AI systems requires datasets with a comprehensive
set of labels that can help ensure the validity and legitimacy of relevant
measurements. Recent efforts therefore focus on collecting person-related
datasets that have carefully selected labels, including sensitive
characteristics, and consent forms in place to use those attributes for model
testing and development. Responsible data collection involves several stages,
including but not limited to determining use-case scenarios and selecting
categories (annotations) such that the data are fit for the purpose …