March 14, 2024, 4:43 a.m. | Andrea Wynn, Ilia Sucholutsky, Thomas L. Griffiths

cs.LG updates on arXiv.org

arXiv:2312.14106v2 Announce Type: replace-cross
Abstract: How can we build AI systems that are aligned with human values to avoid causing harm or violating societal standards for acceptable behavior? We argue that representational alignment between humans and AI agents facilitates value alignment. Making AI systems learn human-like representations of the world has many known benefits, including improving generalization, robustness to domain shifts, and few-shot learning performance. We propose that this kind of representational alignment between machine learning (ML) models and humans …

