Dynamics of Moral Behavior in Heterogeneous Populations of Learning Agents
March 8, 2024, 5:42 a.m. | Elizaveta Tennant, Stephen Hailes, Mirco Musolesi
cs.LG updates on arXiv.org
Abstract: Growing concerns about safety and alignment of AI systems highlight the importance of embedding moral capabilities in artificial agents. A promising solution is the use of learning from experience, i.e., Reinforcement Learning. In multi-agent (social) environments, complex population-level phenomena may emerge from interactions between individual learning agents. Many of the existing studies rely on simulated social dilemma environments to study the interactions of independent learning agents. However, they tend to ignore the moral heterogeneity that …
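The abstract refers to independent learning agents interacting in simulated social dilemma environments. As an illustrative sketch only (not the paper's actual setup; the payoff matrix, stateless Q-learning form, and hyperparameters below are assumptions), two independent tabular learners in a repeated Prisoner's Dilemma might look like this:

```python
import random

ACTIONS = [0, 1]  # 0 = cooperate, 1 = defect
# Standard Prisoner's Dilemma payoffs (my action, opponent action) -> my reward.
# These specific values are an assumption for illustration.
PAYOFFS = {
    (0, 0): 3, (0, 1): 0,
    (1, 0): 5, (1, 1): 1,
}

class QLearner:
    """Stateless (bandit-style) Q-learner, one of the simplest
    'independent learning agent' forms used in such simulations."""
    def __init__(self, alpha=0.1, epsilon=0.1):
        self.q = {a: 0.0 for a in ACTIONS}
        self.alpha = alpha      # learning rate
        self.epsilon = epsilon  # exploration probability

    def act(self):
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[a])

    def update(self, action, reward):
        # One-step update with no next-state bootstrap.
        self.q[action] += self.alpha * (reward - self.q[action])

def simulate(episodes=5000, seed=0):
    random.seed(seed)
    a, b = QLearner(), QLearner()
    for _ in range(episodes):
        ia, ib = a.act(), b.act()
        a.update(ia, PAYOFFS[(ia, ib)])
        b.update(ib, PAYOFFS[(ib, ia)])
    return a.q, b.q

qa, qb = simulate()
print(qa, qb)
```

Because defection strictly dominates under these payoffs, independent self-interested learners typically drift toward mutual defection; population-level studies like the one above ask how introducing morally heterogeneous agents changes such dynamics.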