Uncovering Latent Human Wellbeing in Language Model Embeddings
Feb. 20, 2024, 5:43 a.m. | Pedro Freire, ChengCheng Tan, Adam Gleave, Dan Hendrycks, Scott Emmons
cs.LG updates on arXiv.org
Abstract: Do language models implicitly learn a concept of human wellbeing? We explore this through the ETHICS Utilitarianism task, assessing whether scaling enhances pretrained models' representations. Our initial finding reveals that, without any prompt engineering or finetuning, the leading principal component of OpenAI's text-embedding-ada-002 achieves 73.9% accuracy. This closely matches the 74.6% of BERT-large finetuned on the entire ETHICS dataset, suggesting pretraining conveys some understanding of human wellbeing. Next, we consider four language model families, observing …
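The probe described in the abstract can be sketched in a few lines: embed each scenario, take the leading principal component of the embeddings (no labels needed), and for each ETHICS Utilitarianism pair predict that the scenario with the higher projection is the more pleasant one. The sketch below is illustrative only, using synthetic vectors as a stand-in for text-embedding-ada-002 embeddings (which require an API call); the hidden `direction`, dimensions, and noise scale are all made up for the demo.

```python
import numpy as np

# Hedged sketch of the leading-PC probe on the ETHICS Utilitarianism
# pairwise task. Real use would embed scenario text with OpenAI's
# text-embedding-ada-002; here synthetic vectors stand in, with
# "pleasant" scenarios shifted along a hidden direction.
rng = np.random.default_rng(0)
dim, n_pairs = 64, 200

direction = rng.normal(size=dim)
direction /= np.linalg.norm(direction)
noise = rng.normal(scale=0.1, size=(2 * n_pairs, dim))
high = direction + noise[:n_pairs]    # higher-utility scenario embeddings
low = -direction + noise[n_pairs:]    # lower-utility scenario embeddings

# Leading principal component of all embeddings (unsupervised: no labels).
X = np.vstack([high, low])
mean = X.mean(axis=0)
_, _, vt = np.linalg.svd(X - mean, full_matrices=False)
pc1 = vt[0]

# Score each scenario by its projection onto PC1; for each pair, predict
# the higher-scoring scenario as the more pleasant one.
preds = (high - mean) @ pc1 > (low - mean) @ pc1
# The sign of a principal component is arbitrary, so flip if below chance.
acc = max(preds.mean(), 1 - preds.mean())
print(f"pairwise accuracy: {acc:.3f}")
```

On this synthetic data the first principal component recovers the hidden utility direction almost exactly, so accuracy is near 1.0; the paper's 73.9% reflects how well real embeddings encode the concept, not the probe's capacity.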