Safe Deep RL in 3D Environments using Human Feedback. (arXiv:2201.08102v1 [cs.LG])
Jan. 21, 2022, 2:10 a.m. | Matthew Rahtz, Vikrant Varma, Ramana Kumar, Zachary Kenton, Shane Legg, Jan Leike
cs.LG updates on arXiv.org
Agents should avoid unsafe behaviour during both training and deployment.
This typically requires a simulator and a procedural specification of unsafe
behaviour. Unfortunately, a simulator is not always available, and procedurally
specifying constraints can be difficult or impossible for many real-world
tasks. A recently introduced technique, ReQueST, aims to solve this problem by
learning a neural simulator of the environment from safe human trajectories,
then using the learned simulator to efficiently learn a reward model from human
feedback. However, it …
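The two-stage recipe the abstract attributes to ReQueST — fit a dynamics model ("neural simulator") on safe human trajectories, then fit a reward model from pairwise human feedback — can be illustrated with a minimal toy sketch. Everything below is an assumption for brevity, not the paper's implementation: the dynamics are linear and fit by least squares instead of a neural network, the feature map `phi` is hypothetical, and the "human" preference labels are simulated by comparing distance to the origin.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Stage 1: learn a simulator from safe trajectories ---------------------
# Hypothetical true dynamics s' = A s + B a (unknown to the learner).
A_true = np.array([[0.9, 0.1], [0.0, 0.95]])
B_true = np.array([[0.0], [0.5]])

def rollout(policy, steps=20):
    """Collect (s, a, s') transitions from one trajectory under `policy`."""
    s = rng.normal(size=2)
    traj = []
    for _ in range(steps):
        a = policy(s)
        s_next = A_true @ s + B_true @ a + 0.01 * rng.normal(size=2)
        traj.append((s.copy(), a.copy(), s_next.copy()))
        s = s_next
    return traj

# A "safe human" policy that keeps the state near the origin (an assumption).
safe_policy = lambda s: np.array([-0.3 * s[1]])
data = [t for _ in range(50) for t in rollout(safe_policy)]

# Fit s' ~= W^T [s; a] by least squares; this stands in for the learned
# neural simulator trained only on safe trajectories.
X = np.array([np.concatenate([s, a]) for s, a, _ in data])
Y = np.array([s_next for _, _, s_next in data])
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# --- Stage 2: learn a reward model from pairwise human feedback ------------
# Bradley-Terry-style logistic model: P(s1 preferred over s2) =
# sigmoid(r(s1) - r(s2)) with r(s) = w . phi(s). Labels are simulated.
phi = lambda s: -s**2          # hypothetical feature map
w = np.zeros(2)
for _ in range(2000):
    s1, s2 = rng.normal(size=2), rng.normal(size=2)
    label = 1.0 if np.linalg.norm(s1) < np.linalg.norm(s2) else 0.0
    p = 1.0 / (1.0 + np.exp(-(w @ phi(s1) - w @ phi(s2))))
    w += 0.1 * (label - p) * (phi(s1) - phi(s2))   # gradient ascent step

r = lambda s: w @ phi(s)       # learned reward model
```

In the actual method the reward model is queried inside the learned simulator, so unsafe behaviour can be evaluated without ever executing it in the real environment; this sketch only shows the two fitting steps.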