Aug. 11, 2022, 1:11 a.m. | Babak Hemmatian, Lav R. Varshney

cs.CL updates on arXiv.org arxiv.org

Recent work demonstrates a bias in the GPT-3 model towards generating violent
text completions when prompted about Muslims, compared with Christians and
Hindus. Two pre-registered replication attempts, one exact and one approximate,
found only the weakest bias in the more recent Instruct Series version of
GPT-3, fine-tuned to eliminate biased and toxic outputs. Few violent
completions were observed. Additional pre-registered experiments, however,
showed that using common names associated with the religions in prompts yields
a highly significant increase in violent …

arxiv language language models large language models

Data Architect

@ University of Texas at Austin | Austin, TX

Data ETL Engineer

@ University of Texas at Austin | Austin, TX

Lead GNSS Data Scientist

@ Lurra Systems | Melbourne

Senior Machine Learning Engineer (MLOps)

@ Promaton | Remote, Europe

Technology Consultant Master Data Management (w/m/d)

@ SAP | Walldorf, DE, 69190

Research Engineer, Computer Vision, Google Research

@ Google | Nairobi, Kenya