When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment. (arXiv:2210.01478v2 [cs.CL] UPDATED)
Oct. 19, 2022, 1:13 a.m. | Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf
Source: cs.LG updates on arXiv.org
AI systems are becoming increasingly intertwined with human life. In order to effectively collaborate with humans and ensure safety, AI systems need to be able to understand, interpret and predict human moral judgments and decisions. Human moral judgments are often guided by rules, but not always. A central challenge for AI safety is capturing the flexibility of the human moral mind -- the ability to determine when a rule should be broken, especially in novel or unusual situations. In this …