March 14, 2024, 4:43 a.m. | Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths

cs.LG updates on arXiv.org

arXiv:2312.14106v2 Announce Type: replace-cross
Abstract: How can we build AI systems that are aligned with human values to avoid causing harm or violating societal standards for acceptable behavior? We argue that representational alignment between humans and AI agents facilitates value alignment. Making AI systems learn human-like representations of the world has many known benefits, including improving generalization, robustness to domain shifts, and few-shot learning performance. We propose that this kind of representational alignment between machine learning (ML) models and humans …

