April 22, 2024, 4:46 a.m. | Pablo Biedma, Xiaoyuan Yi, Linus Huang, Maosong Sun, Xing Xie


arXiv:2404.12744v1 Announce Type: new
Abstract: Recent advancements in Large Language Models (LLMs) have revolutionized the AI field but also pose potential safety and ethical risks. Deciphering LLMs' embedded values is therefore crucial for assessing and mitigating these risks. Although LLMs' values have been investigated extensively, prior studies rely heavily on human-oriented value systems from the social sciences. A natural question thus arises: Do LLMs possess unique values beyond those of humans? To explore this, this work proposes a novel framework, ValueLex, to reconstruct …

