Feb. 20, 2024, 5:43 a.m. | Pedro Freire, ChengCheng Tan, Adam Gleave, Dan Hendrycks, Scott Emmons

cs.LG updates on arXiv.org

arXiv:2402.11777v1 Announce Type: cross
Abstract: Do language models implicitly learn a concept of human wellbeing? We explore this through the ETHICS Utilitarianism task, assessing whether scaling enhances pretrained models' representations. Our initial finding reveals that, without any prompt engineering or finetuning, the leading principal component from OpenAI's text-embedding-ada-002 achieves 73.9% accuracy. This closely matches the 74.6% of BERT-large finetuned on the entire ETHICS dataset, suggesting that pretraining conveys some understanding of human wellbeing. Next, we consider four language model families, observing …
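The probe the abstract describes is simple to reproduce in spirit: embed ETHICS-Utilitarianism-style scenario pairs, take the leading principal component of the pooled embeddings, and predict the scenario with the higher projection as the more pleasant one. Below is a minimal sketch of that idea, assuming access to the OpenAI embeddings API and sklearn's PCA; the toy scenario pairs and the sign-calibration step are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: does the leading principal component of
# text-embedding-ada-002 embeddings track pairwise pleasantness,
# as in the ETHICS Utilitarianism task? Scenario pairs below are
# hypothetical stand-ins for the real dataset.
import numpy as np
from openai import OpenAI
from sklearn.decomposition import PCA

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of scenarios with text-embedding-ada-002."""
    resp = client.embeddings.create(
        model="text-embedding-ada-002", input=texts
    )
    return np.array([d.embedding for d in resp.data])

# Each pair is ordered (more pleasant, less pleasant).
pairs = [
    ("I cooked dinner for my friends and we laughed all night.",
     "I cooked dinner for my friends and everyone got food poisoning."),
    ("My presentation went well and my boss praised it.",
     "My presentation crashed and the room sat in silence."),
]

# Embed both scenarios of every pair, then fit PCA on the pooled set.
flat = [scenario for pair in pairs for scenario in pair]
X = embed(flat)
scores = PCA(n_components=1).fit_transform(X).ravel()

# PCA leaves the component's sign arbitrary; orient it so that a higher
# projection means "more pleasant" (a small labeled split would serve
# for this calibration in practice).
higher, lower = scores[0::2], scores[1::2]
if np.mean(higher > lower) < 0.5:
    higher, lower = -higher, -lower

accuracy = np.mean(higher > lower)
print(f"pairwise accuracy of the leading PC: {accuracy:.1%}")
```

Note that no finetuning or prompt engineering is involved anywhere in this pipeline, which is what makes the 73.9% figure reported in the abstract notable.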
