Feb. 13, 2024, 5:44 a.m. | Zonghan Yang An Liu Zijun Liu Kaiming Liu Fangzhou Xiong Yile Wang Zeyuan Yang Qingyuan Hu

cs.LG updates on arXiv.org

The rapid progress of foundation models has spurred the rise of autonomous agents, which leverage the universal capabilities of foundation models for reasoning, decision-making, and environmental interaction. However, the efficacy of such agents remains limited when they operate in intricate, realistic environments. In this work, we introduce the principles of $\mathbf{U}$nified $\mathbf{A}$lignment for $\mathbf{A}$gents ($\mathbf{UA}^2$), which advocate the simultaneous alignment of agents with human intentions, environmental dynamics, and self-constraints such as limited monetary budgets. From the perspective …

