April 23, 2024, 4:49 a.m. | Edward Y. Chang

cs.CL updates on arXiv.org

arXiv:2404.13071v1 Announce Type: new
Abstract: This paper explores the integration of human-like emotions and ethical considerations into Large Language Models (LLMs). We first model eight fundamental human emotions, presented as opposing pairs, and employ collaborative LLMs to reinterpret and express these emotions across a spectrum of intensity. Our focus extends to embedding a latent ethical dimension within LLMs, guided by a novel self-supervised learning algorithm with human feedback (SSHF). This approach enables LLMs to perform self-evaluations and adjustments concerning ethical …
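The "opposing pairs with intensity" idea can be sketched minimally in code. The pairings below follow Plutchik's wheel of emotions and are an assumption for illustration; the paper's actual eight emotions and pairings may differ, and `EmotionState` is a hypothetical structure, not the authors' implementation.

```python
from dataclasses import dataclass

# Assumed pairings (Plutchik-style); the paper's pairs may differ.
OPPOSING_PAIRS = [
    ("joy", "sadness"),
    ("trust", "disgust"),
    ("fear", "anger"),
    ("surprise", "anticipation"),
]

@dataclass
class EmotionState:
    """One value in [-1.0, 1.0] per pair: the sign selects the pole,
    the magnitude encodes intensity along the spectrum."""
    values: dict  # pair index -> float in [-1, 1]

    def dominant(self):
        """Return (emotion, intensity) for the strongest axis."""
        idx, v = max(self.values.items(), key=lambda kv: abs(kv[1]))
        pos, neg = OPPOSING_PAIRS[idx]
        return (pos if v >= 0 else neg, abs(v))

state = EmotionState({0: 0.7, 1: -0.2, 2: 0.1, 3: 0.0})
print(state.dominant())  # ('joy', 0.7)
```

Representing each pair as a single signed axis keeps opposing emotions mutually exclusive by construction, which matches the abstract's framing of eight emotions as four opposing pairs expressed across an intensity spectrum.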
