April 23, 2024, 4:49 a.m. | Edward Y. Chang

cs.CL updates on arXiv.org arxiv.org

arXiv:2404.13071v1 Announce Type: new
Abstract: This paper explores the integration of human-like emotions and ethical considerations into Large Language Models (LLMs). We first model eight fundamental human emotions, presented as opposing pairs, and employ collaborative LLMs to reinterpret and express these emotions across a spectrum of intensity. Our focus extends to embedding a latent ethical dimension within LLMs, guided by a novel self-supervised learning algorithm with human feedback (SSHF). This approach enables LLMs to perform self-evaluations and adjustments concerning ethical …
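As a rough illustration of the opposing-pair framing described in the abstract, here is a minimal sketch: eight emotions as four opposing pairs, each with a signed intensity. The specific pairs (borrowed from Plutchik's wheel) and the intensity mapping are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: eight emotions as four opposing pairs, each with a
# signed intensity in [-1.0, 1.0] (positive leans toward the first pole).
# The concrete pairs are assumed (Plutchik-style); the paper's actual
# pairing and intensity scale may differ.

EMOTION_PAIRS = [
    ("joy", "sadness"),
    ("trust", "disgust"),
    ("fear", "anger"),
    ("surprise", "anticipation"),
]

def describe(pair_index: int, intensity: float) -> str:
    """Map a signed intensity to a labeled point on one pole of a pair."""
    pos, neg = EMOTION_PAIRS[pair_index]
    if abs(intensity) < 0.1:
        return f"neutral ({pos}/{neg})"
    pole = pos if intensity > 0 else neg
    level = "mild" if abs(intensity) < 0.5 else "strong"
    return f"{level} {pole}"

print(describe(0, 0.8))   # strong joy
print(describe(2, -0.3))  # mild anger
```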
