April 22, 2024, 4:46 a.m. | Pablo Biedma, Xiaoyuan Yi, Linus Huang, Maosong Sun, Xing Xie

cs.CL updates on arXiv.org

arXiv:2404.12744v1 Announce Type: new
Abstract: Recent advancements in Large Language Models (LLMs) have revolutionized the AI field but also pose potential safety and ethical risks. Deciphering LLMs' embedded values becomes crucial for assessing and mitigating their risks. Despite extensive investigation into LLMs' values, previous studies heavily rely on human-oriented value systems in social sciences. Then, a natural question arises: Do LLMs possess unique values beyond those of humans? Delving into it, this work proposes a novel framework, ValueLex, to reconstruct …

