Sept. 2, 2022, 1:12 a.m. | Richard Ngo

cs.LG updates on arXiv.org

Within the coming decades, artificial general intelligence (AGI) may surpass
human capabilities at a wide range of important tasks. This report makes a case
for why, without substantial action to prevent it, AGIs will likely use their
intelligence to pursue goals which are very undesirable (in other words,
misaligned) from a human perspective, with potentially catastrophic
consequences. The report aims to cover the key arguments motivating concern
about the alignment problem in a way that's as succinct, concrete and
technically-grounded …

Tags: alignment, arxiv, deep learning, perspective
